Your skepticism is well founded. Statistics alone don't establish causality, but you can add causal assumptions to generate stronger evidence. There are usually three main assumptions, and they aren't verifiable in every context, so in those cases extra considerations are used: exchangeability (i.e., unconfoundedness or conditional unconfoundedness), no interference (i.e., one unit's intervention doesn't affect another unit's outcome), and a well-defined intervention. Other things can be incorporated as well: I usually think about the Bradford Hill criteria (temporality, dose-response, biological plausibility, consistency, specificity if possible, etc.) and contagion, along with sensitivity analyses. If these assumptions are confirmable or at least reasonable, then the results of the model can be treated as plausibly causal. In addition, a model is only an approximation to the real data-generating process, so the model itself also has to be correctly specified for any of the above to hold.
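To make the exchangeability point concrete, here's a minimal simulated sketch (numpy only, all numbers made up): an unmeasured confounder drives both treatment and outcome, so the naive treated-vs-untreated contrast is biased, while adjusting for the confounder (pretending we could measure it) restores conditional exchangeability and recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured confounder U drives both treatment and outcome.
u = rng.normal(size=n)
treatment = (u + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * treatment + 3.0 * u + rng.normal(size=n)  # true effect = 2.0

# Naive contrast: biased, because treated and untreated units differ on U
# (exchangeability fails).
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Adjusting for U restores conditional exchangeability:
# regress outcome on an intercept, treatment, and U.
X = np.column_stack([np.ones(n), treatment, u])
adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][1]

print(f"naive estimate:    {naive:.2f}")     # well above 2.0
print(f"adjusted estimate: {adjusted:.2f}")  # close to 2.0
```

The point of the toy example is just that the arithmetic only becomes a causal estimate once the exchangeability assumption (here, "we measured the confounder") is defensible.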
I am not an economics person, but economists try to get closer to the randomization paradigm via instrumental variables and designs like regression discontinuity. Still, unless the data are generated in a vacuum, in a laboratory with treatment assignment fully controlled, you never truly confirm causality. That said, most experts in causal inference will agree that you can make causal claims if you are able to meet most of the causal assumptions.
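If it helps to see the instrumental-variables idea in code, here's a toy two-stage least squares sketch on simulated data (again, everything here is hypothetical): the instrument affects the outcome only through the treatment, so projecting the treatment onto the instrument strips out the confounded variation that biases plain OLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.normal(size=n)                      # instrument: moves T, not Y directly
u = rng.normal(size=n)                      # unmeasured confounder
t = 0.8 * z + u + rng.normal(size=n)        # treatment
y = 1.5 * t + 2.0 * u + rng.normal(size=n)  # true effect of T on Y is 1.5

def ols_slope(x, y):
    """Slope coefficient from a simple regression of y on [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is biased because U confounds T and Y.
print(f"OLS:  {ols_slope(t, y):.2f}")       # noticeably above 1.5

# Stage 1: project T onto the instrument; Stage 2: regress Y on the projection.
Z = np.column_stack([np.ones_like(z), z])
t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
print(f"2SLS: {ols_slope(t_hat, y):.2f}")   # close to 1.5
```

Of course the IV estimate is only as good as the instrument's assumptions (relevance and the exclusion restriction), which brings you right back to the point above: the causal claim rests on assumptions, not on the statistics by themselves.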
Does this help?