Can an LLM be used as a time capsule?

Midjourney: Cyborg human holding a stone tablet with hieroglyphic carvings...

I have been working with LLMs for some time, and while we have yet to skim even the surface of what they can do, I have a philosophical question. Let's assume humanity wants to preserve its knowledge, either in case we get wiped out by one of a million possible causes or simply for the benefit of future humans. Can we use an LLM (Large Language Model, think ChatGPT) as a time capsule? There are a couple of things we need to consider.

Storage

Preserving information long-term is challenging. I do not mean decades (even that is not simple), but eons. Without proper and active care, none of the materials we currently use to preserve information would last a thousand years. Information can also be destroyed easily, by a simple fire (the Library of Alexandria) or by environmental changes like global warming (the Svalbard Global Seed Vault). Most of what we know about the past comes either from stone and in-ground deposits or from light that has been traveling to our telescopes since the Big Bang.

So, long-term storage has to be done less conveniently than just copying something to a disk on an anonymous server of one of the cloud providers. Whatever form it takes, we would want to minimize the size of the artifact.

Information retrieval

Another thing to consider is how you retrieve and work with the data. If you are a researcher of, say, Ancient Rome, you would spend thousands of hours reading hundreds of thousands of pages of source documents. Those documents would be in a language nobody speaks anymore (languages evolve a lot and die fast). The transformer architecture behind LLMs was originally designed for translation (it was introduced in the "Attention Is All You Need" paper), so fine-tuning an LLM "time capsule" to speak the contemporary language would be a piece of cake.

Size

An LLM is not only capable of answering questions; it also compresses its training data. According to the "gold standard" scaling research (the Chinchilla paper), the compute-optimal ratio of training tokens to model parameters is about 20:1. So I dare simplify it as 20x compression (a questionable simplification, but I am not trying to make a scientific statement).
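To get a feel for the numbers, here is a back-of-envelope sketch of how big such a "time capsule" model would be for a given corpus. The byte-level assumptions are mine, not from any paper: roughly 4 bytes of text per token and 2 bytes per parameter (fp16 weights); with those, the artifact comes out even smaller than the 20x token-to-parameter ratio suggests.

```python
# Back-of-envelope estimate of an LLM "time capsule" size, using the
# ~20 training tokens per parameter ratio from scaling-law research.
# Assumed (not from the article): ~4 bytes of text per token, and
# 2 bytes per parameter (fp16 weights).

TOKENS_PER_PARAM = 20   # compute-optimal tokens-to-parameters ratio
BYTES_PER_TOKEN = 4     # rough average for English text (assumption)
BYTES_PER_PARAM = 2     # fp16 storage (assumption)

def capsule_size_bytes(corpus_bytes: float) -> float:
    """Size of a model trained on `corpus_bytes` at the 20:1 ratio."""
    tokens = corpus_bytes / BYTES_PER_TOKEN
    params = tokens / TOKENS_PER_PARAM
    return params * BYTES_PER_PARAM

# Example: a 100 TB text corpus shrinks to a 2.5 TB model artifact.
corpus = 100e12
model = capsule_size_bytes(corpus)
print(f"{corpus / 1e12:.0f} TB corpus -> {model / 1e12:.2f} TB model "
      f"(~{corpus / model:.0f}x smaller)")
```

Of course, this is lossy "compression": the model stores patterns and facts, not the exact text, which is precisely the hallucination caveat below.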


There is still the problem of LLM hallucinations, and the fact that the data an LLM produces is probabilistic in nature. However, imagine if, instead of reading a book on ancient times, you could ask an LLM that had captured most of the relevant information from that era. I would spend days prompting an LLM from the past.

I do not suggest carving all those parameters on stone tablets 😀, so please treat this as a thought experiment. Maybe thinking about this will give you another perspective on life or at least on the application of LLMs.

Please share your thoughts; I am eager to learn!