I think this is where a lot of LLMs will land: local usage. At first people will try to use big general models for their rather domain-specific tasks, until they realize smaller specialized models can do the same thing cheaper, with no subscription or per-token costs and with full ownership of the model instead of just renting one.
Imagine having a model that is proprietary to your company, costs nothing but electricity to run, with no rented compute capacity. You own the model, can customise it at will, and face no restrictions.
Currently, super-large models are driven by the hope of AGI capabilities, but we are still years away from that, and LLMs will never get us there on their own; it requires different architectures.
I run Ministral 8B on my laptop, and as long as I don't give it overly complex tasks it can do things like translating, explaining simple things, or helping me understand functions and basic code while I am learning. If it can do web search and RAG over an index, you don't need that big a model for it to work as a decent assistant.
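To illustrate why retrieval does so much of the heavy lifting for a small model, here is a minimal sketch of the retrieval half of such a setup: score a few indexed snippets against a question and hand the best match to the model as context. The documents and the keyword-overlap scoring are hypothetical stand-ins; a real setup would use embeddings and a local runtime such as llama.cpp or Ollama serving Ministral 8B.

```python
# Toy retrieval step for a local RAG assistant. The docs and scoring
# are illustrative only; real systems use embedding similarity.

def tokenize(text: str) -> set[str]:
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs sharing the most keywords with the question."""
    q = tokenize(question)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

# Hypothetical in-house snippets the assistant has indexed.
docs = [
    "Invoices are archived under /finance/2024 after approval.",
    "The VPN config is rotated every 90 days by IT.",
    "Expense reports above 500 EUR need manager sign-off.",
]

context = retrieve("Where are approved invoices stored?", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQ: Where are approved invoices stored?"
```

The point is that the retrieved context, not the model's parameter count, supplies the company-specific knowledge; the small model only has to read and rephrase it.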
If you want to replace people and not just enhance their work, then at current prices I don't think you get your bang for the buck. A model owned and run by someone outside the company is just a very expensive consultant… they'll leave with all their competence the moment you stop paying. An in-house AI will never leave, only gets better with time, and costs nothing beyond electricity and the initial capital cost of the server.
I give it a year or two before companies realise this, but first they need to realise that the money they spend on subscriptions and tokens isn't an investment but a cost. They don't get that money back. A local model is an investment that keeps on giving after the initial capital outlay, and it costs much less for repetitive work.
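The cost argument reduces to a simple break-even calculation: one-off hardware plus ongoing electricity against recurring API spend. A sketch, with all figures being hypothetical assumptions for illustration rather than real prices:

```python
# Back-of-the-envelope break-even for owning a server vs. renting a model.
# Every number below is an assumed, illustrative figure.

def breakeven_months(server_cost: float, power_per_month: float,
                     api_per_month: float) -> float:
    """Months until the owned server is cheaper than continued API spend.

    Each month the owned setup saves (api_per_month - power_per_month),
    so the one-off hardware cost is paid back after
    server_cost / monthly_savings months.
    """
    monthly_savings = api_per_month - power_per_month
    return server_cost / monthly_savings

months = breakeven_months(
    server_cost=15_000,    # assumed one-off GPU server
    power_per_month=120,   # assumed electricity bill
    api_per_month=1_500,   # assumed subscription + token spend
)
```

Under these made-up numbers the server pays for itself in under a year; the real answer obviously depends on actual usage volume and hardware prices.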
Mistral is betting on this and I think it will pay off. Unless I am wrong about AGI.