This is an idea I’ve been toying with for a bit. A ton of media includes unimportant information that doesn’t need to be stored pixel-perfect, so storing large portions of the image data as text would save substantial amounts of storage. As on-device image generation becomes commonplace, digital memories could become the main way people capture the world around them. I think this will inevitably be the next form of media capture (alongside photography and video), not replacing other methods or formats, but I could see things like phone cameras saving images as digital memories by default to save on storage.
Which is why I wanted to include video in my concept: video file sizes are getting out of control.
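Roughly, what I picture being stored is something like this. It’s purely a sketch of how I imagine a “digital memory” record could look; none of these fields, names, or the generator function are an existing format or API.

```python
from dataclasses import dataclass, field

# A sketch of what a "digital memory" might contain instead of full pixel
# data: most of the scene is described as text and regenerated on demand,
# while only the parts that actually matter (faces, signs, a tiny reference
# thumbnail) are kept as real pixels. Everything here is hypothetical.

@dataclass
class DigitalMemory:
    description: str              # text standing in for the bulk of the scene
    seed: int                     # keeps regeneration roughly repeatable
    model_id: str                 # which on-device generator to use
    anchors: dict[str, bytes] = field(default_factory=dict)  # small lossless crops

def recall(memory: DigitalMemory, generate):
    """Rebuild an approximate image from the stored description.

    `generate` is whatever on-device text-to-image function is available;
    the anchors would then be composited back over the generated scene
    (compositing is omitted in this sketch).
    """
    return generate(memory.description, seed=memory.seed, model=memory.model_id)
```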
As a way to store information, it’s overly complicated and comes with all the downsides of human memory.
It could be useful, however, as a way to illustrate how imperfect human memory is, or to add deliberate “memory” decay to an artificial intelligence.
That makes even less sense. The CPU/GPU usage would be insane, and at large scale it would quickly approach crypto mining in terms of energy use, which is already a big problem for the environment.
Storing large files, on the other hand, takes relatively little energy once they’re sitting on a hard drive.
Imo video sizes will eventually plateau; there is only so much resolution people actually need. From a distance the difference is tiny: at 5 m I can’t really tell 4K from 1080p on a 50-inch TV, let alone resolutions above that.
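For what it’s worth, a quick back-of-envelope check supports that, assuming a 16:9 panel and the commonly quoted ~1 arcminute resolving power of 20/20 vision (both figures are assumptions on my part):

```python
import math

# How big does one pixel look from 5 m on a 50" screen, compared to the
# ~1 arcminute usually quoted for 20/20 vision? (Acuity figure assumed.)

DIAGONAL_IN = 50          # screen diagonal in inches
DISTANCE_M = 5.0          # viewing distance in metres
ASPECT = (16, 9)          # assumed 16:9 panel

diag_m = DIAGONAL_IN * 0.0254
width_m = diag_m * ASPECT[0] / math.hypot(*ASPECT)

for name, h_pixels in [("1080p", 1920), ("4K", 3840)]:
    pixel_m = width_m / h_pixels
    arcmin = math.degrees(math.atan2(pixel_m, DISTANCE_M)) * 60
    print(f"{name}: one pixel subtends ~{arcmin:.2f} arcmin at {DISTANCE_M} m")

# Typical output: 1080p is about 0.40 arcmin and 4K about 0.20 arcmin, so
# both are already below the ~1 arcmin threshold at that distance.
```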
Are they? Video compresses really well these days. How is replacing real footage with generated content that cannot be accurate better than keeping accurate video?
Video file sizes are actually getting smaller all the time, but when filming we don’t save a neatly compressed video file. On-the-fly compression and encoding would help a ton in shrinking camera footage, but it’s still very expensive CPU-wise at the moment.
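To illustrate what that looks like as an offline step instead: a slow re-encode after capture typically shrinks a camera original a lot, precisely because the camera had to encode in real time on cheap hardware. This is a sketch assuming ffmpeg is installed; the file names, codec, and quality settings are just plausible examples, not a recommendation.

```python
import subprocess

def reencode(src: str, dst: str, crf: int = 28) -> None:
    # Re-encode a camera original into HEVC with a slow offline pass.
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # camera original
            "-c:v", "libx265",    # HEVC video
            "-crf", str(crf),     # quality target (lower = better/larger)
            "-preset", "slow",    # spend CPU time to gain compression
            "-c:a", "copy",       # leave the audio stream untouched
            dst,
        ],
        check=True,
    )

# Hypothetical file names, just for illustration.
reencode("clip_from_phone.mp4", "clip_hevc.mp4")
```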