

I’m a researcher in ML, and LLMs absolutely fall under ML. The “learning” in “machine learning” just means fitting the parameters of a model, i.e., solving an optimization problem. In the case of an LLM, that means fitting the parameters of the transformer.
A model doesn’t have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it’s probably the first thing you’ll do if you take an ML course at a university. Decision trees, nearest-neighbor classifiers, and linear models are all machine learning models, despite the fact that nobody would consider them intelligent.
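
To make that concrete, here’s a minimal sketch (synthetic data, made-up numbers) of what “learning” amounts to in the least-squares case: finding the two parameters that minimize squared error, nothing more.

    import numpy as np

    # Synthetic data: y = 2x + 1 plus a little noise.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=100)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

    # "Learning" here is just solving for the slope and intercept that
    # minimize the squared error -- a plain optimization problem.
    X = np.column_stack([x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(params)  # roughly [2.0, 1.0]

Swap gradient descent for the closed-form solve and scale the parameter count up by a few billion, and you’re in LLM territory; the framing is the same.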
This type of thing is mostly used for inference with extremely large models, where a single GPU has far too little VRAM to even load the model into memory. I doubt people expect this to perform particularly fast; they just want to get the model to run at all.
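
Not necessarily what this particular project does, but as a rough sketch of the idea: with Hugging Face transformers + accelerate, device_map="auto" shards a model across whatever GPUs are available and spills the remainder to CPU RAM, so a model that doesn’t fit on one card still runs, just slowly. The model ID below is a placeholder, not a recommendation.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-very-large-model"  # placeholder, pick your own

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",          # shard layers across GPUs, offload the rest to CPU
        torch_dtype=torch.float16,  # halve the memory footprint vs. fp32
    )

    inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

The layers that end up on CPU (or disk) get shuttled to a GPU on demand, which is exactly why throughput is poor but the model runs at all.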