Graphs like this will always be population graphs. Russia is the 9th most populous country, with 140 million people.
❤️ İstanbul ❤️
Elden Ring runs better on proton/wine/dxvk than it does on Windows.
herseycokguzelolacak@lemmy.ml to Memes@lemmy.ml • The Brandenburg Gate in Berlin: 1940 vs 2023 • 162 · 2 days ago
Germany’s support for Israel shows that Germany learned nothing from the tragedy of the Holocaust.
The Holocaust teaches us that genocide and mass murder are wrong and evil. Yet the modern German government supports Israel in committing genocide against Palestinians.
There will be another memorial in Berlin 50 years from now, commemorating the Palestinians who were killed in the genocide that Germany supported.
herseycokguzelolacak@lemmy.ml to Linux@lemmy.ml • Occurences of swearing in the Linux kernel source code over time • 4 · 5 days ago
Swearing in source code points to a healthy and organic development process.
herseycokguzelolacak@lemmy.ml to Selfhosted@lemmy.world • What is a self-hosted small LLM actually good for (<= 3B) • English • 1 · 5 days ago
Not off the top of my head, but there must be something. llama.cpp and vllm have basically solved the inference problem for LLMs. What you need is a RAG solution on top that also combines it with web search.
herseycokguzelolacak@lemmy.ml to Selfhosted@lemmy.world • What is a self-hosted small LLM actually good for (<= 3B) • English • 51 · 6 days ago
For coding tasks you need web search and RAG. It’s not the size of the model that matters, since even the smallest models can use solutions found online.
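The "RAG on top of the model" idea can be sketched with the retrieval half alone. This is a minimal, illustrative sketch using keyword overlap as the relevance score; a real setup would embed chunks (from local docs or web-search results) into a vector store and send the assembled prompt to a llama.cpp or vllm server. The corpus and function names here are my own, not from any real project.

```python
# Minimal retrieval-augmented-generation (RAG) sketch.
# Relevance here is plain token overlap; real systems use embeddings.

def tokenize(text: str) -> set[str]:
    """Lowercase and split into a set of whitespace-separated tokens."""
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most tokens with the query."""
    scored = sorted(
        docs,
        key=lambda d: len(tokenize(d) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Prepend the retrieved context to the user question."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy corpus standing in for scraped docs / web-search snippets.
docs = [
    "yt-dlp extracts audio with the -x flag.",
    "llama.cpp runs GGUF models on CPU and GPU.",
    "vllm serves models behind an OpenAI-compatible API.",
]
prompt = build_prompt("how do I run GGUF models with llama.cpp", docs, k=1)
```

The resulting `prompt` string is what you would send to the local model's completion endpoint; the model then answers from the retrieved context rather than from its weights alone.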
herseycokguzelolacak@lemmy.ml to LocalLLaMA@sh.itjust.works • LLMs and their efficiency, can they really replace humans? • English • 1 · 6 days ago
LLMs are great at automating tasks where we already know the solution, and a lot of workflows fall into this category. They are horrible at solving new problems, but that is not where the opportunity for LLMs is anyway.
herseycokguzelolacak@lemmy.ml to LocalLLaMA@sh.itjust.works • Current best local models for tool use? • English • 1 · 6 days ago
For VLMs I love Moondream2. It’s a tiny model that punches well above its weight. llama.cpp supports it.
On Linux it is easy to record the audio that is sent to the sound device; lots of programs do it.
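One common way to do this is recording the PulseAudio/PipeWire "monitor" source of the output device with ffmpeg. The sketch below only assembles the command line; the source name `default.monitor` is an assumption — list the real ones on your system with `pactl list short sources`.

```python
import subprocess  # used in the commented-out invocation below

def build_capture_cmd(source: str = "default.monitor",
                      outfile: str = "capture.wav") -> list[str]:
    """Assemble an ffmpeg command that records a PulseAudio monitor source.

    The monitor source name is system-specific; "default.monitor" is an
    assumption -- check yours with `pactl list short sources`.
    """
    return ["ffmpeg", "-f", "pulse", "-i", source, outfile]

cmd = build_capture_cmd()
# To actually record (requires ffmpeg and a running PulseAudio/PipeWire):
# subprocess.run(cmd, check=True)
```

`parecord --device=<source>.monitor out.wav` is an equivalent one-liner if you would rather skip ffmpeg.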
onthespot doesn’t look too bad if you want to download from Spotify directly.
I just download from youtube though. It’s very easy with yt-dlp.