
On 2/16/25 3:58 AM, Steve Litt via talk wrote:
> OK, this is a very compelling reason. Just one more question...
> In order to come up with answers anywhere near as good as ChatGPT's answers, wouldn't I need to have the equivalent of all of ChatGPT's web-acquired information, which, even if in digested form, would probably require a trillion TB disk? And what kind of monster CPU would be required to access and logic out such a plethora of info?
This is not correct, but it is one way to think about the process. The thing about LLMs is that they take all those exabytes of data and build relationships between the various bits of it, and that resulting dataset is much smaller than the original data. Then various optimizations are applied to make things smaller still.

Think of all the stuff you have ever seen, read, or done. You don't need to keep it all around to remember it. The downside is that you don't always remember it in totality or accurately. Human and AI memory are both lossy compression techniques.

-- 
Alvin Starr           ||  land: (647)478-6285
Netvel Inc.           ||  Cell: (416)806-0133
alvin@netvel.net      ||
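
To put rough numbers on "much smaller than the original data", here is a back-of-envelope sketch. The parameter count, bytes per weight, and token count are illustrative assumptions, not figures for any particular model, and the curated training text is itself only a small slice of the raw crawled web:

# Back-of-envelope: on-disk size of an LLM vs. its training text.
# All figures are illustrative assumptions, not measurements.

params = 70e9              # assume a 70-billion-parameter model
bytes_per_weight = 2       # 16-bit weights; 4-bit quantization roughly halves this again
model_bytes = params * bytes_per_weight

training_text_bytes = 15e12 * 4   # assume ~15 trillion tokens at ~4 bytes of text each

print(f"Model on disk: {model_bytes / 1e9:,.0f} GB")
print(f"Training text: {training_text_bytes / 1e12:,.0f} TB")
print(f"Ratio: roughly {training_text_bytes / model_bytes:,.0f}x smaller")

Under those assumptions the model fits in ~140 GB (less with quantization), a few hundred times smaller than the text it was trained on, and nowhere near a trillion TB.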