> Another one that makes sense is having an AI monitor system stats and “learn” patterns in them, then alert a human when it “thinks” there’s an anomaly.
In the best cases, those would be ML but not specifically an LLM, no?
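Concretely, that kind of monitor can be plain statistics: a rolling z-score over the metric, no LLM anywhere. A minimal sketch (the window size, warm-up count, and threshold here are illustrative assumptions, not anyone's production settings):

```python
from collections import deque
import math

def make_monitor(window=60, threshold=3.0):
    """Flag a metric sample as anomalous when it sits more than
    `threshold` standard deviations from the rolling mean."""
    history = deque(maxlen=window)  # only the last `window` samples count

    def observe(value):
        anomaly = False
        if len(history) >= 10:  # need a baseline before judging anything
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > threshold:
                anomaly = True
        history.append(value)
        return anomaly

    return observe

# Steady CPU-style readings, then a spike:
check = make_monitor()
readings = [50 + (i % 3) for i in range(30)] + [95]
flags = [check(v) for v in readings]  # only the spike is flagged
```

A fancier version would learn seasonality or correlations across metrics, but the shape is the same: fit a baseline, alert on deviation.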
> It’s data collection like you mentioned in your original post, and it uses the same sort of approach to ingesting that data as an LLM does for text.
>
> As for a valid use of LLMs: natural language search (with cited sources) is something they’re already doing. This is especially useful in highly technical fields where the end users have the expertise to vet responses but there’s way too much data for a human to parse.
But one big LLM trained on everything isn’t that.
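And the citations in that search use case come from the retrieval step, which on its own also needs no LLM. A toy sketch of ranked retrieval that carries source IDs through to the results (the corpus, source labels, and TF-IDF scoring are illustrative assumptions; a real system would put an LLM on top to summarize the retrieved passages, citing these IDs):

```python
import math
from collections import Counter

# Toy corpus of (source_id, text). In practice these would be
# chunks of the technical documentation being searched.
CORPUS = [
    ("manual-3.2", "restart the ingest service after rotating credentials"),
    ("runbook-7", "high memory usage on ingest nodes usually means a stuck queue"),
    ("faq-12", "credentials are rotated every ninety days by the platform team"),
]

def tokens(text):
    return text.lower().split()

def search(query, corpus=CORPUS, k=2):
    """Rank passages by TF-IDF-weighted term overlap with the query,
    keeping each passage's source ID attached for citation."""
    n = len(corpus)
    # document frequency of each term across the corpus
    df = Counter(t for _, text in corpus for t in set(tokens(text)))
    scored = []
    for source, text in corpus:
        tf = Counter(tokens(text))
        score = sum(
            tf[t] * math.log(n / df[t])  # rarer terms weigh more
            for t in tokens(query) if t in tf
        )
        scored.append((score, source, text))
    scored.sort(reverse=True)
    return [(source, text) for score, source, text in scored[:k] if score > 0]

hits = search("rotating credentials")  # each hit comes with its source ID
```

The point being: the grounding (what to cite) lives in retrieval; the LLM's job is just turning the retrieved passages into a readable answer.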