Reading “How Deep Learning Networks Can Use Virtual Worlds To Solve Real World Problems” and “Artificial Intelligence Can Now Design Realistic Video and Game Imagery” got me thinking about how we can use AI to improve synthetic imagery. Rather than simply “create high-quality videos or images from low-resolution ones”, we could use AI to tunnel through the uncanny valley: let the algorithm learn the difference between a synthetic (virtual) world and a real one, then automatically fill in the gaps. Offering this as a configurable step in the graphics pipeline could yield never-before-seen photo-realistic worlds.
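One way to picture such a pipeline step is as a residual post-process: a trained network looks at a rendered frame and predicts a correction that nudges it toward photorealism. The sketch below is purely illustrative and assumes a hypothetical `predict_residual` model; the `toy_residual` stand-in is a simple blur, not a trained network.

```python
import numpy as np

def refine_frame(frame, predict_residual):
    """Hypothetical post-process pass: add a learned residual that nudges
    a rendered frame (H, W, 3 floats in [0, 1]) toward photorealism.
    `predict_residual` stands in for a trained network."""
    residual = predict_residual(frame)
    return np.clip(frame + residual, 0.0, 1.0)

def toy_residual(frame):
    """Stand-in "network": half the difference between a 3x3 box blur
    and the input -- a mild smoothing residual, purely illustrative."""
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = sum(
        padded[i:i + frame.shape[0], j:j + frame.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return 0.5 * (blurred - frame)

# A fake "rendered" frame in place of real engine output.
rendered = np.random.default_rng(0).random((64, 64, 3)).astype(np.float32)
refined = refine_frame(rendered, toy_residual)
```

In a real system the residual model would be trained adversarially or against photo datasets, and the pass would sit alongside other screen-space effects so artists could toggle or weight it per scene.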
Some interesting academic work in this area:
- Nalbach, Oliver, et al. “Deep Shading: Convolutional Neural Networks for Screen-Space Shading.” arXiv preprint arXiv:1603.06078 (2016).
- Zhu, Jun-Yan, et al. “Learning a Discriminative Model for the Perception of Realism in Composite Images.” Proceedings of the IEEE International Conference on Computer Vision. 2015.
- Johnson, Micah K., et al. “CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.” IEEE Transactions on Visualization and Computer Graphics 17.9 (2011): 1273–1285.