  • Plug it into a computer and see what the computer says.

    I usually use Linux for that because it offers good error messages and I know the tools. But other operating systems might help, too.

    And if you start writing to the card or running recovery tools, make a backup / image of it first (a sketch of that step follows below).

    If the files are very important, maybe don’t tamper with it and ask for help instead: a repair shop, your local Linux community, or a trustworthy computer expert friend.

    The biggest enemy is probably encryption, if the card is encrypted. If you just ripped it out, the files themselves are definitely still there; in the old days you could simply run a recovery program and get everything back.
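
    A minimal sketch of that backup step, assuming a Linux machine where the card shows up as a block device (the device path and file name are placeholders, not something from the thread):

    ```python
    # Make a raw image of the card before touching it with recovery tools.
    # Run as root; find the right device with `lsblk` first.
    import shutil

    DEVICE = "/dev/sdX"          # hypothetical: replace with your card's device
    IMAGE = "sdcard-backup.img"  # point recovery tools at this copy instead

    with open(DEVICE, "rb") as src, open(IMAGE, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
    ```

    In practice GNU ddrescue is the usual tool for this, since it keeps going over read errors instead of aborting like a plain copy does.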



  • I think most people use something like exllamav2 or vllm, or use GGUF, to do inference, and it seems none of those projects has properly implemented multimodality or this specific model architecture yet.

    You might just be at the forefront of things and there isn’t yet any beaten path you could follow.

    The easiest thing you could do is just use something that already exists, for example 4-bit models, or wait a few weeks and then upgrade. And you can always quantize models yourself and set the parameters however you like, provided you have an inference framework that supports your model, including the adapters for vision, and offers the quantization levels you’re interested in… (a rough sketch of the 4-bit route follows below)
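
    A sketch of that 4-bit route, assuming the model is one that transformers already supports; the model name is just an example, not something from the thread:

    ```python
    # Load a model quantized to 4 bits on the fly via bitsandbytes,
    # instead of waiting for exllamav2/vllm/GGUF support.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model

    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",             # 4-bit normal-float
        bnb_4bit_compute_dtype=torch.float16,  # dtype used for the matmuls
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=quant, device_map="auto"
    )
    ```

    This needs the accelerate and bitsandbytes packages installed, and it won’t help with the vision adapters if the architecture itself isn’t implemented yet.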


  • Well, I’d say there is information in language. That’s kinda the point of it and why we use it. And language is powerful: we can describe and talk about a lot of things. (And it’s an interesting question what cannot be described with language.)

    I don’t think the stochastic parrot thing is a proper debate. It’s just that lots of people don’t know what AI is and what it can and cannot do. And it’s neither easy to understand, nor are the consequences always that obvious.

    Training LLMs involves some clever trickery, like limiting their size, so they can’t just memorize everything but are instead forced to learn the concepts behind those texts.

    I think they form models of the world inside of them, at least of the things they’ve learned from the dataset. That’s why they can, for example, translate text: they have some concept of a cat stored inside of them and can apply it to a different language that uses entirely different characters to name that animal (a toy demonstration follows below).

    I wouldn’t say they are “tools to learn more aspects about nature”. They aren’t a sensor or something. And they can infer things, but not ‘measure’ things like an X-ray.
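
    A toy way to see that shared-concept effect, assuming a multilingual embedding model (the model name is my pick, not from the comment):

    ```python
    # Words for the same concept in different languages land close together
    # in a multilingual embedding space; unrelated concepts don't.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    emb = model.encode(["cat", "Katze", "bicycle"])

    print(util.cos_sim(emb[0], emb[1]))  # "cat" vs. German "Katze": high
    print(util.cos_sim(emb[0], emb[2]))  # "cat" vs. "bicycle": much lower
    ```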


  • Thanks for taking the time to explain it to me. The GitHub issue is also very helpful. It seems that’s exactly the answer to “Why do I need a fourth store in addition to F-Droid, Aurora Store and Obtainium?” 😉

    Have a nice day, and thanks for the STT keyboard! I didn’t really engage in the discussion because I’m in exactly the same situation as other people here. I already have the FUTO one and Sayboard… But eventually I’d like to replace the FUTO software with free software alternatives, since I don’t like their licensing. So this is very welcome.


  • I’m pretty sure he did this out of his own motivation, because he thinks (or thought) it’s a fascinating topic. So, sure, this doesn’t align with popularity. But it’s remarkable anyway, you’re right. And I always like to watch the progression. As far as I remember, the early videos lacked the professional audio and video quality that is nowadays the norm on YouTube. At some point he must have bought better equipment, but his content has been compelling since the start of his YouTube ‘career’. 😊

    And I quite like the science content on YouTube. There are lots of people making really good videos, both professional video producers and scientists (or hobbyists) who just share their insights and interesting perspectives.



  • Yeah, it doesn’t really work. I mean, it has a rough idea that it needs to go east, and I’m genuinely surprised that it knows which interstates are in an area and even a few street names in the cities. But I told it to get me from Houston to Montgomery, as in your example. In Houston it just lists random street names that aren’t even connected and sit in different parts of the city. Then it drives north on the I-45 and somehow ends up in the south on the I-610-E and finally the I-10-E. But then it makes up some shit, somehow drives to New Orleans, then a bit back, and zig-zags its way back onto the I-10. Then come some more instructions I didn’t fact-check, and it gets that it needs to go through Mobile and then north on the I-65.

    I’ve also tested ChatGPT on Germany, and it likewise gets which Autobahn connects to the next. It still does occasional zig-zags, and in between it likes to do an entire 50 km (30 mile) loop that ends up two cities back where it came from… then it drives east again and takes a different exit on the second try.

    However: I’m really surprised by the level of spatial awareness. I wouldn’t have expected it to come up with mostly correct cardinal directions, interstates that are actually connected and run through the mentioned cities, and even the cities in between.

    I don’t think I need to try “phi”. Small models have very limited knowledge stored inside of them. They’re too small to remember lots of things.

    So, you were right. Consider me impressed. But I don’t think there is a real-world application for this unless your car has a teleporter built in to deal with the inconsistencies.
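
    For anyone who wants to reproduce the test, here’s a minimal sketch using the OpenAI client (the model name is a placeholder; any chat model can be swapped in):

    ```python
    # Ask a chat model for interstate directions, then fact-check the
    # route against an actual map.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Give turn-by-turn driving directions from Houston, TX "
                       "to Montgomery, AL. Name the interstates and the cities "
                       "the route passes through.",
        }],
    )
    print(resp.choices[0].message.content)
    ```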