AI-augmented “Reading” — Many-to-One

In Reader-augmented Writing, I present a way to think about how communication has evolved: how written language, the printing press, and digital media have allowed more people to reach more people with ideas (one-to-many), with readers adding to what authors wrote (many-to-many).

Now, with the recent significant developments in large language models (LLMs), I’m adding a fork to that evolution: AI-augmented, many-to-one “Reading.”

Before LLMs: Some/Many-to-Many

Digital media, such as the web and e-books, lowered the barrier to producing content with a wide reach. Nearly anybody — the first “many” — can publish digital content that’s available to everyone (in practice, most content is lost in the vastness, and only “some” actually reach many). Before that, wide reach was limited to those with access to widely distributed physical publishing, such as newspapers or books. Further, a lot of content is relatively static: “write once, read many.” Once an article is published or a video is uploaded to YouTube, only a few will ever be updated in place. They contain ideas only up to the point of publishing, and only the ideas of the author(s).

That’s where ‘Reader-augmented Writing’ comes into play. It makes it possible for additional people to piggyback their content — their insights, ideas, and perspectives — onto more widely-reaching content. That helps more people reach more readers.

However, that contributes to a different problem…

As readers, we often have too much content available to us

Worse, that content is often scattered and difficult to discover (even with the help of search engines). LLMs bring a new dynamic to the content production and consumption model…

AI-augmented “Reading” — Many-to-One

One of the main things LLMs do for us is digest content, often distilling dozens or hundreds of pages from vast, scattered resources into a couple of sentences or paragraphs. Hence, “many-to-one.” There even seems to be evidence that LLMs are helping surface content that previously did not have wide reach; for better and worse! On the “worse” side especially, they surface obscure singular tidbits, such as a comment buried in a discussion suggesting adding glue to cheese on pizza, or an unremarkable satirical article recommending eating at least one small rock per day.

Such distilled content is also mostly — currently — generated for a single user and just for that moment. Even when it’s generated for multiple users, it often needs to update as additional relevant content is added or discovered. Hence, it’s generally ephemeral as well.

| Medium | Sources | Reach | Dynamics & Speed | Durability |
| --- | --- | --- | --- | --- |
| Spoken Language | One | Few | Static; slow dissemination | Ephemeral |
| Written Language | One | Many, small scale | Static; slow dissemination | Across time (duration depends on medium) |
| Printing Press | One | Many, large scale | Static; medium-fast dissemination | Across time (duration depends on medium) |
| Digital Content (Web and e-books) | Some | Many, large scale | Medium: updatable, sometimes; very fast dissemination | Across time (easy to archive) |
| Augmented & Collaborative Books and Articles | Many | Many | High: evolves with contributions; fast dissemination (often real-time) | Across time |
| 🆕 Large Language Models | Many, aggregated | One | High: can surface different content with follow-up; fast (as quickly as the system picks up new content) | Ephemeral |
LLM-generated content could then be published (as could any other class of content), giving it reach to “many” and making it durable.

LLMs can do much more than just digest content, of course, but that’s outside the scope of this framework.

My Take

We all know what it’s like to sift through a handful of search results to find the one that answers our question. Similarly, I often find it onerous to read through articles on a “Support” site to see which one has the answer I need. So I’m glad for LLMs here: they help with that sifting, making it easier to find what we’re looking for within a mountain of content.

LLMs can also expose us to a broader set of ideas and perspectives more quickly and easily: for example, the overall sentiment of many reviewers, the perspectives of many news sources, or the knowledge of many different experts.

⚠ Of course, LLMs have their issues, like hallucinations and regurgitating questionable content. That should be less of a problem when an LLM is applied to a limited use case, such as serving as an assistant for a support site. The technology is also still very young, and these issues are among LLMs’ most significant, so they’re likely to be addressed fairly quickly. 🤞

I’m most excited for AI’s assistance in the medical field. Doctors specialize; they focus their expertise out of necessity. The flip side of that: they can’t be experts in everything. LLMs and AI, meanwhile, can access deep knowledge on just about everything, and quickly. At the very minimum, they can whittle the vast set of possible diagnoses down to a few that a doctor can focus on, and they will likely also help make diagnoses more accurate and precise (not to mention assisting with prognosis and treatment, too).

tl;dr

The evolution of communication from verbal and hand-written to digital has made the dissemination of knowledge very quick; i.e., from one-to-few ephemerally to one-to-many and many-to-many durably. But with the vastness of content, it can be hard for the right knowledge to reach those who need it.

AI is helping us (among many other things) individually access that knowledge more quickly and easily — many-to-one.
