Yes. They never went into Ukraine, or even the referendum-liberated New Russia regions.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•In a span of less than 6 months, cumulative downloads of Chinese open models had not only overtaken US models, but began to open a widening lead
14 · 9 days ago
China’s narrative on the events preceding “tank man” isn’t that no one was hurt/nothing happened. It is that a riot had to be put down. Generally, people (brainwashed by US media) won’t be happy until the CIA is the only valid information source, and AI must parrot it.
Just as with your other media, use sources that validate your preconceptions for any superficial question.
The popularity of local LLMs has very little to do with seeking private answers to politicized questions, and more to do with utility in coding/images/reasoning capabilities. The news in this post appears to be the consensus that Chinese open models are better at solving user problems/tasks.
Mice are very cautious and learn traps/locations quickly. Predators need not worry about such trivialities.
humanspiral@lemmy.ca to
Linux@lemmy.ml•I just found out my fiancee wants to switch to linux, lets start a distro war, what should be her first? + other questions
12 · 1 month ago
If the older computer works fine, I’d get a new 780M (AMD) mini PC. They support 3+ monitors and have 2 network ports, allowing you to “daisy chain” the old computer. No transferring of anything, or worrying about getting old stuff still working.
Deskflow is a mouse/keyboard sharing app. If you keep the old computer in sleep mode you don’t need an extra keyboard/mouse, but power outages mean that unless you have a floor-standing old PC you can stack the old keyboard/mouse on top of, you will occasionally need to plug a keyboard and mouse into the old computer to restart Deskflow (if you don’t set it to autostart; see the sketch below).
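A minimal sketch of that autostart setup on a freedesktop-compliant desktop, assuming the `deskflow` binary is on your PATH (the `Exec=` line is an assumption; adjust it for your install):

```python
# Minimal sketch: write a freedesktop autostart entry so Deskflow
# launches on login. Assumes the `deskflow` binary is on PATH.
from pathlib import Path

autostart_dir = Path.home() / ".config" / "autostart"
autostart_dir.mkdir(parents=True, exist_ok=True)

entry = """[Desktop Entry]
Type=Application
Name=Deskflow
Exec=deskflow
Comment=Start mouse/keyboard sharing on login
"""

(autostart_dir / "deskflow.desktop").write_text(entry)
print("wrote", autostart_dir / "deskflow.desktop")
```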
It’s far more convenient than dual booting. You can use resources from both computers over the network, with seamless mouse/keyboard focus. Switching one monitor over for occasional use is better than dual booting, because rebooting, especially on older computers, is slow.
Deskflow needs a Linux distribution with a modern kernel. Ubuntu 24.04 is recent enough; Linux Mint has not upgraded its kernel yet. AFAIU, the other main difference between Mint (recommended here) and Ubuntu is a slightly prettier desktop (Cinnamon).
humanspiral@lemmy.ca to
Mildly Infuriating@lemmy.world•2022 vs. 2026 FIFA World Cup ticket prices
32 · 1 month ago
Are those standard, uniform Ticketmaster prices? What are typical NFL regular season, playoff, and Super Bowl prices?
From Gemini:
NFL ticket prices

| Season/Round | Price Range | Notes |
| --- | --- | --- |
| Regular Season | $50–$500+ | Upper-level seats can start around $50–$100, while premium lower-level and club seats can cost $200–$500+. The average NFL ticket price in 2023 was $377, a jump from $235 in 2022. Team popularity and opponent rivalry significantly influence pricing. According to vocal.media, the average cost of NFL season tickets in 2024 ranged between $600 and $3,000 per seat. |
| Wild Card Round | Starting around $145 | Prices can fluctuate, with some games seeing higher averages. |
| Divisional Round | Starting around $400 | Some games have averaged around $993. |
| Conference Championships | Starting around $800 | Expect to pay at least $800 for the most affordable tickets. |
| Super Bowl | Starting around $2,000–$3,000 | Seats can exceed $1,800, especially in high-demand markets. The average price for Super Bowl 2025 tickets was reported at $8,076 by StubHub, down 14% from 2024. Previous Super Bowls have seen average prices like $12,082 (2024) and $8,907 (2023). Prices on the secondary market are typically higher than face value. |

I don’t expect a lot of tourism demand to come to the US, especially with ICE announcing any excuse to raid the Super Bowl, of all events. The World Cup is more popular with immigrants and tourists than the NFL, and the “NFL demographic” tends to have low interest in soccer. MLS (pro soccer in the US) has ticket prices about 6x lower than the NFL.
humanspiral@lemmy.ca to
Linux@lemmy.ml•Ubuntu 25.10's Move To Rust Coreutils Is Causing Major Breakage For Some Executables
33 · 2 months ago
There seems to be a bug in the Rust md5 implementation. This can break everything, but then everything can soon be fixed, too.
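A quick cross-check sketch for this kind of breakage: compare the system `md5sum` against Python’s `hashlib`, which serves as the reference here (the default file path is just a placeholder):

```python
# Compare the system `md5sum` output against Python's hashlib
# to spot a broken md5 implementation in coreutils.
import hashlib
import subprocess
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/etc/hostname"

with open(path, "rb") as f:
    expected = hashlib.md5(f.read()).hexdigest()

# md5sum prints "<digest>  <filename>"
out = subprocess.run(["md5sum", path], capture_output=True, text=True, check=True)
actual = out.stdout.split()[0]

print("hashlib:", expected)
print("md5sum :", actual)
print("match" if expected == actual else "MISMATCH: md5sum looks broken")
```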
humanspiral@lemmy.ca to
Mildly Infuriating@lemmy.world•Charlie Kirk in his own words. Half of the United States is paying tribute to this man.
191 · 2 months ago
No comment on genocidal replacement theory or Christofascist rationale for Zionaziism. Why omit the absolute worst of Kirk?
humanspiral@lemmy.ca OP to
LocalLLaMA@sh.itjust.works•autoround (optimized for intel but works on amd) integer quantization provides good CPU performance, and good accuracy benchmarks.
2 · 2 months ago
int4 would be faster on CPU than fp4. They show benchmarks claiming better “accuracy”/less quality loss than other 4-bit quantization methods (all fp4 variants). int4 and fp4 have the same memory requirement. I don’t think they claim that the actual post-quantization transformation process takes fewer resources than the fp4 alternatives.
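A minimal sketch of why the memory footprint is identical: both formats store 4 bits per weight, so two weights pack into one byte; only the decode step differs. The per-tensor symmetric scaling below is a simplification for illustration, not autoround’s actual scheme:

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric per-tensor int4 quantization: codes in [-8, 7]."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(8).astype(np.float32)
q, scale = quantize_int4(w)
print("original :", np.round(w, 3))
print("recovered:", np.round(dequantize_int4(q, scale), 3))
# Packed storage: 4 bits per weight -> 2 weights per byte, same as fp4.
print("packed bytes for", w.size, "weights:", w.size // 2)
```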
humanspiral@lemmy.ca OP to
LocalLLaMA@sh.itjust.works•NVIDIA's Peter Belcak Explains Why SLMs (smaller LLMs) are the Future of Agentic AI
2 · 2 months ago
MoE
This optimization is about smaller matrix multiplications. Experts specialize on input token types, and while a MoE is better at being split across resources (GPUs), it is not really specialization on “output domain” (type of work). All experts need to be in memory.
Deepseek made a 7B math-focused LLM that beat all other models on math benchmarks, even 540B math-specialist LLMs. More than any internal speed/structure “tricks”, they achieved this through highly curated training data.
The small models we get now tend to just be pruned from larger generalist models. The paper/video is suggesting smaller models that are post-trained (“large tuned”) to be domain specialists. A large model could select from domain-specialist models and only load those into memory, or act as a judge combining the outputs of the “sub models”.
Where an LLM is a giant probabilistic classifier, there are much faster, more accurate, less compute-intensive deterministic classifiers (expert/rule systems). Where SLMs have advantages, using even cheaper classification steps goes in the same direction. A smaller LLM is automatically a faster classifier, an alternative to one hammer banging on everything.
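A minimal sketch of that direction: a cheap deterministic rule step routes a request to a hypothetical domain-specialist SLM, so only that specialist needs loading. The model names, keywords, and routing table are assumptions for illustration, not anything from the paper:

```python
# Cheap deterministic routing in front of domain-specialist SLMs.
# Model names below are hypothetical placeholders.
KEYWORD_ROUTES = {
    ("integral", "prove", "equation"): "math-specialist-7b",
    ("def ", "compile", "stack trace"): "code-specialist-7b",
}
DEFAULT_MODEL = "generalist-slm-4b"

def route(prompt: str) -> str:
    """Rule-based classification: far cheaper than asking an LLM to route."""
    text = prompt.lower()
    for keywords, model in KEYWORD_ROUTES.items():
        if any(k in text for k in keywords):
            return model
    return DEFAULT_MODEL

print(route("Prove that the integral of x is x^2/2"))          # math-specialist-7b
print(route("Why does this stack trace mention NoneType?"))    # code-specialist-7b
```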
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•[Transformer Circuits Thread] Circuit Vignette: How does a persona modify the Assistant’s response?
1 · 3 months ago
Does “you are an expert at …” ever help with responses?
Could “you are desperately trying to improve your miserable performance and accuracy at answering coding questions in the J language; answer this question with extreme analysis as to whether or not you are responding with garbage” work?
Would that help any better?
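One way to check is a quick A/B sketch against a local OpenAI-compatible endpoint; the URL, model name, and exact personas here are assumptions for illustration, not anything from the linked thread:

```python
# A/B test two system personas against a local OpenAI-compatible server.
# Endpoint URL and model name are hypothetical; adjust for your setup.
import json
import urllib.request

URL = "http://localhost:8080/v1/chat/completions"
PERSONAS = [
    "You are an expert J programmer.",
    "You are desperately trying to improve your miserable accuracy at J.",
]
QUESTION = "In J, what does the verb +/ do when applied to a list?"

for persona in PERSONAS:
    body = json.dumps({
        "model": "local-model",
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
    }).encode()
    req = urllib.request.Request(URL, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"]
    print(f"--- {persona}\n{answer}\n")
```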
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•what's the best model these days I could fit in 128gb ram?
1 · 3 months ago
> On principle, I didn’t like running a model that will refuse to talk about things China doesn’t like.
A good way to define a political issue is that there are at least 2 sides to a narrative. You can’t use an LLM to decide which side to favour if you can’t really use Wikipedia either. It takes deep expertise and an open mind to determine which side is more likely to contain more truth.
You may or may not seek confirmation of your political views, but the media you like should do so more than an LLM, and it is a better LLM that avoids confirming or denying your views, arguably anyway.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•I clustered four Framework Mainboards to test huge LLMs
1 · 3 months ago
The takeaway, sadly, is that a 512GB M3 Ultra Mac is way better for a similar price, with today’s software, but also because of the network interconnect speed limit. The 2026 and 2027 AMD GPU roadmaps call for cheap 128GB+ LPDDR. Tech companies stubbornly refuse to take our money by offering options that are technically feasible today.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•HP Z2 Mini G1a Review: Running GPT-OSS 120B Without a Discrete GPU
1 · 3 months ago
An 8700G with 256GB RAM is possible on desktop. Half the APU performance, but for coding, a less stupid bigger model > fast. No one seems to be using such a rig, though.
The redactions to the 9/11 report were entirely to protect KSA’s full cooperation with US intelligence in making 9/11 happen. The dancing Israeli Mossad agents, whom the FBI let escape, went home to a Tel Aviv TV morning show to explain “they were there to document the event”. Dick Cheney happening to supervise a NORAD training exercise at all, much less on the same day, and directing the response, cannot be a coincidence. Larry Silverstein’s $1B asbestos problem, his raising of insurance coverage, and his authority over the NYFD to pull WTC 7 the same day show not only abnormal prescience but direct lines to higher authority to make his interests happen. The FBI never determined that OBL was responsible, because higher-ups prevented their investigation.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•First try at local openai/gpt-oss-20b
1 · 3 months ago
I tried the 120B hosted on Hugging Face. Worse than most smaller models at coding in the J language. None that I’ve tried are great, but this was one of the worst at accepting corrections, and it had the most errors per line. I’m not in a hurry to try their other models because of this.
> that installation will never again see a route to the internet.
That’s what I was suggesting for the OP, other than perhaps a Cakewalk/audio software update. Firewalled RD should be safe enough?
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•Very large amounts of gaming gpus vs AI gpus
1 · 3 months ago
For AMD there is the 7900 XT at 20GB, and the 7900 XTX at 24GB. The 4090 and 3090 are 24GB. The AMD ones might have similar $/GB and $/TFLOP to the 9070 XT.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server
2 · 3 months ago
There’s a big opportunity for AMD to make a motherboard for a mobile APU (or the 8700G) with 8 or more DDR5 slots. 48GB SODIMMs are fairly affordable. A 780M build would score over double the performance at close to half the cost.
humanspiral@lemmy.ca to
LocalLLaMA@sh.itjust.works•MindLink-32B and MindLink-72B available on Huggingface
1 · 3 months ago
Reddit suggested that “post training” means training on the tests, rather than improving general programming outputs.

Link to what this means?