Yes, AR is doable and has been done before in experiments, but it is not yet part of the main integration.
For successful AR you would need much more, though: there would be no light reaction to the changing environment, that is future stuff, and it is also not a goal of VAM.
Most scenes would just look odd in AR depending on the environment you blend them into, especially a lot of the cartoon-like characters.
The current light model is not stable in any way for AR, and there are not many who design with this goal in mind at all.
AR at the same level of complexity as VR in VAM will still take years to become efficient enough compared to a simulated environment.
There is a reason why most VAM scenes sit in the lower range of luminance perception.
VAM is built entirely on keeping resource use low by cheating your perception, and it works; it shows that we don't care so much about what we see, but more about how it feels.
But of course, the more we bring both together, the more immersive it becomes, and with it we are heading straight into the experiences you know from sci-fi, like the absolutely masterful TEKWAR.
VAM, however, concentrates first and foremost on the simulation side of complexity, across its entire spectrum.
It's already complex; adding on top the complexity of blending efficiently into AR in real time is very heavy. Look how far we are with raytracing: we are slowly getting there, but bringing it all into AR is still far away, especially for end consumers.
Though with the advances in machine learning we have become much faster at solving complex problems, so I'm not sure it is really that far away anymore.
But overall we have much bigger problems to solve in our society, and the future looks very dim again; who knows if there will be a future at all if we don't gather resources for more important problem-solving fast.
The current generation has, in terms of how it feels, never been closer to extinction.