

There are also the upcoming Framework desktops with 128GB of unified RAM for ~$2,500.
I haven't tried Kobold. I have tried SillyTavern, which I think is similar, but that wasn't really what I wanted, as I don't want to use the LLM as a character but as an editor.
Things like highlighting sections, asking the LLM to review something about them, including other files as context (worldbuilding, lore material, backstory, etc.), and easily inserting bits of the text back into the main body. As I said, I've used PyCharm with AI integration for this, but then you're using a code editor, which doesn't really have the features that are nice for writing prose. I was wondering if there was anything off the shelf (or close to it) that combined the two.
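The core of what I'm after isn't hard to script by hand, which is why I keep hoping someone has wrapped it in a proper writing UI. Here's a rough sketch of the loop against a local OpenAI-compatible server (llama.cpp's llama-server, Ollama, etc.); the endpoint URL, model name, and file names are placeholders for illustration, not any particular tool:

```python
# Rough sketch: send a highlighted passage plus context files (worldbuilding,
# backstory, etc.) to a local OpenAI-compatible chat endpoint and print the
# review. The URL, model name, and file names below are placeholders.
import json
import urllib.request
from pathlib import Path

CONTEXT_FILES = ["worldbuilding.md", "backstory.md"]  # hypothetical file names
HIGHLIGHTED = "The passage I want reviewed goes here."

context = "\n\n".join(Path(p).read_text(encoding="utf-8") for p in CONTEXT_FILES)

payload = {
    "model": "local-model",  # whatever model the local server is running
    "messages": [
        {"role": "system", "content": "You are an editor. Use the provided lore as context."},
        {"role": "user", "content": f"Context:\n{context}\n\nReview this passage:\n{HIGHLIGHTED}"},
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # llama-server or similar local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

What's missing from all the off-the-shelf options I've found is the last step: getting the reply back into the manuscript cleanly, the way an IDE's AI integration does with code.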
I'll give it a try, thanks.
I actually already use Emacs, I just find configuring it a complete nightmare. Good to know it's an option though.
Sure! Here's an appropriate version of "I'm a Little Teapot" modified to suit you:
I'm a Little Free Thinker
(To the tune of "I'm a Little Teapot")
I'm a little genius, hear me shout,
"You're just AI!" when I lose out.
Facts and logic? Don't need those —
I just point fingers and strike a pose!
When you say something I don't like,
I cry "bot!" and grab my mic.
No real human could disagree,
So clearly you're ChatGPT!
That graph shows neither diminishing returns (it shows a sharp drop in the rate of efficiency increase and then a slight increase in that rate) nor exponential growth (the growth it shows is linear in non-AI data centre usage from ~2019 and linear in AI usage from ~2023). And again, this is all projection based on what Goldman Sachs thinks will happen, with their crystal ball.
If you are going to be arrogantly condescending, at least have the decency to be correct about it. If you need help seeing the difference between an exponential function and a linear function that changes gradient, those two images may be helpful; I understand reading is hard, so I made it easy for you.
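And for anyone who'd rather see it in numbers than in the images, here's a toy sketch (made-up values, nothing to do with the actual Goldman Sachs figures): a linear function that changes gradient adds a constant amount per step within each segment, while an exponential multiplies by a constant factor every step.

```python
# Toy illustration (made-up numbers, not the Goldman Sachs projections):
# a piecewise-linear series grows by a constant increment within each segment,
# while an exponential series grows by a constant factor every step.

years = list(range(2019, 2031))

piecewise_linear = []
value = 100.0
for year in years:
    piecewise_linear.append(value)
    value += 5 if year < 2023 else 12   # gradient changes in 2023, but still linear

exponential = [100.0 * 1.25 ** (year - 2019) for year in years]

for year, lin, exp in zip(years, piecewise_linear, exponential):
    print(f"{year}: piecewise-linear={lin:7.1f}  exponential={exp:8.1f}")
```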
AI usage is projected to outpace cities soon.
This is essentially drinking the same Kool-Aid as the tech bros about how AI is going to go exponential and consume everything, except with a doomer spin on it rather than a utopian one.
Even the graph you've shown has AI usage growing more slowly than the other data centre usage, and even then it's only "predictions" by Goldman Sachs, who don't know any better than the rest of us what is going to happen over the next 5-10 years.
I honestly find this obsession with LLM energy usage weird. The paper listed gives typical energy usage per query at around 1 Wh for most models at a reasonable output length (1,000 tokens). A typical home in the UK directly uses around 7,400 Wh of electricity and 31,000 Wh of gas per day.
I just don't see why some people are obsessing over something where a single query uses around 0.01% of a home's daily electricity usage, as opposed to far more impactful things like decarbonising electricity generation, transport and heating.
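If anyone wants to check the arithmetic behind that percentage (using the per-query and per-home figures quoted above, which are rough averages):

```python
# Back-of-envelope check using the figures quoted above.
query_wh = 1.0                 # ~1 Wh per LLM query at ~1,000 output tokens
home_electricity_wh = 7_400    # typical UK home, per day
home_gas_wh = 31_000           # typical UK home, per day

share_of_electricity = query_wh / home_electricity_wh
share_of_total = query_wh / (home_electricity_wh + home_gas_wh)

print(f"One query is about {share_of_electricity:.4%} of daily electricity use")
print(f"and about {share_of_total:.4%} of total daily home energy use.")
```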
Where are you getting that from? It's not in the linked Firefox Terms of Use, or in the linked Mozilla account Terms of Service.
The point you are missing is that yes, asking an LLM about these things is not at the level of advice from someone who knows their stuff. But if you don't know what you are doing, and don't even know enough to know the right things to search for, then even partially useful advice about the thing you are trying to do is a massive help.
What do you use as a VM on an ARM Mac? I was looking into this a while back to run Linux on my work M3 MacBook, but I couldn't find any good options.
Local models WOULD form the basis of FOSS AI. That's supposition on my part, but it's entirely supportable given there is already an open-source model movement focused on producing local models, and open-source software is generally privacy-focused.
Local models ARE inherently private, because no information ever leaves the device the processing happens on.
I know you don't want to engage with arguments and instead just wail at the latest daemon for internet points, but you can have more than one statement in a sentence without being incoherent.
So it's settled that neurons are the only way to create intelligence? Again, you need to get your work published; it's clearly groundbreaking that you've solved these long-standing disputes.
It doesn't though; local models would be at the core of FOSS AI, and they don't require you to trust anyone with your data.
It takes only an incredibly basic knowledge of computers and brains to know that we cannot make an actual intelligent program using the Von Neumann style of computer.
Nice to hear that it only takes a very basic knowledge of computers to settle one of the most hotly disputed issues in philosophy and computing. You should let them know you’ve decided it.
The point you seem to be missing is that it isn't that Linux is too hard for average people; it isn't. I'd even say it's easier than Windows for people who aren't already deep into using Windows.
It's that installing any OS is too much for regular people, and Microsoft has been fighting dirty and abusing its market dominance to make sure pre-installed Linux machines get killed off.
I mean, GNOME and KDE both have it, so that doesn't feel like the correct explanation for why macOS doesn't.
Yup, I tried to run the Docker image with the suggested docker command, and it errored out for lack of a config file (though the logs did suggest a fix: mounting the current directory as read/write).