Don't round it. Rounding is the death of intro statistics students... heck, you could round 0.046 to 0 too if you wanted to round to the nearest tenth!
At a 0.05 significance level it is technically significant, but it is very borderline - there's definitely some gray area here. If this were an experiment, that might be an indication that another test should be done, perhaps with a larger sample size or at a different significance level (0.01?).
I don't see anything terribly wrong with rounding to 2 dp in general (Jacob Cohen used to love ranting about how we use far too many dp's in psychology to make ourselves look smarter than we are), but when, at a given rounding level, the p value equals your alpha level, it's probably best to clarify whether the actual p value is smaller or larger than the cutoff by showing a few more dp's.
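To make the point concrete, here's a minimal sketch (Python, purely for illustration - the specific p values are the ones from this thread) of how rounding to 2 dp can hide which side of the cutoff a p value falls on:

```python
# Two p values on opposite sides of alpha = 0.05 that become
# indistinguishable once rounded to 2 decimal places.
alpha = 0.05
p_values = [0.046, 0.054]

for p in p_values:
    rounded = round(p, 2)  # both round to 0.05
    print(f"p = {p}: rounded to 2 dp = {rounded}, "
          f"significant at alpha = {alpha}? {p < alpha}")
```

Both values report as "p = 0.05" after rounding, yet one rejects the null at the 0.05 level and the other doesn't - exactly the ambiguity a few extra decimal places would resolve.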
That said, a dataset with a p value of 0.046 for a given effect is only marginally different from one with a p value of, say, 0.054 - showing, as usual, how a simple accept/reject cutoff isn't very good at modelling reality. As gambs suggests, some kind of extra validation, such as using a different sample, might be a good idea - and don't forget to report the actual size of the effect.
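Since the point above is to report the effect size alongside the p value, here's a hedged sketch (hypothetical two-group data; the group names and parameters are invented for illustration) of computing Cohen's d next to a t-test:

```python
# Hypothetical two-sample comparison: report Cohen's d alongside p.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(0.5, 1.0, 40)  # invented data for illustration
group_b = rng.normal(0.0, 1.0, 40)

t_stat, p = stats.ttest_ind(group_a, group_b)

# Cohen's d using a pooled standard deviation (equal group sizes)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"p = {p:.3f}, Cohen's d = {d:.2f}")
```

Two datasets can land on opposite sides of 0.05 while having nearly identical effect sizes, which is why reporting d (or another effect measure) tells the reader more than the accept/reject verdict alone.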
I agree - anything borderline would be grounds to recheck things like whether the assumptions of the test you used were met, and whether any outliers in the data could point to processes at work other than what you intended to measure. On the other hand, you could just accept that it is technically significant... it just depends on how prudent you need to be.
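As a rough sketch of the kind of recheck described above (entirely hypothetical data and a one-sample t-test assumed for the example), one might screen the normality assumption and flag outliers before trusting a borderline result:

```python
# Hypothetical sample; recheck assumptions behind a borderline t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=30)  # invented data

# Shapiro-Wilk test for the normality assumption
shapiro_stat, shapiro_p = stats.shapiro(sample)

# Simple outlier screen: points more than 3 MADs from the median
mad = stats.median_abs_deviation(sample)
outliers = sample[np.abs(sample - np.median(sample)) > 3 * mad]

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"Shapiro-Wilk p = {shapiro_p:.3f}, outliers flagged: {len(outliers)}")
print(f"t-test p = {p_value:.3f}")
```

A very small Shapiro-Wilk p value or a handful of flagged points wouldn't automatically invalidate the result, but either would be a reason to dig further before leaning on a p value this close to the cutoff.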