Hi! I am Creesch, also creesch on other platforms :)
For real, it almost felt like an LLM written article the way it basically said nothing. Also, the way it puts everything in bullet points is just jarring to read.
True, though that isn’t all that different from people giving knee-jerk responses on the internet…
I am not claiming they are perfect, but for the steps I described, a human who is aware of the limitations is perfectly able to validate the outcome, while still having saved a bunch of time and effort on the initial search pass.
All I am saying is that it is fine to be critical of LLM and AI claims in general as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum. Which is no longer being critical but just being a reverse fanboy/girl/person.
I don’t know how to say this in a less direct way. If this is your take then you probably should look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:

- Turn a natural-language question into one or more sensible search queries.
- Summarize and extract information from text you hand them.
- Answer questions about content that is provided as context.

These are all the building blocks for searching on the internet. If you are talking about local documents and such, retrieval augmented generation (RAG) can be pretty damn useful.
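To make the “LLM plus a bit of code” idea a little more concrete, here is a rough sketch of the retrieval half of such a setup. To be clear, this is a toy I wrote for this comment: the documents are made up and I am scoring by plain word overlap instead of proper embeddings, just to keep it self-contained and runnable.

```typescript
// Toy retrieval step for a RAG-style setup. Real implementations use
// embeddings and a vector store; plain word overlap is only used here
// to keep the example self-contained.

type Doc = { title: string; text: string };

// Made-up documents, purely illustrative.
const docs: Doc[] = [
  { title: "VPN setup", text: "How to configure the office VPN on macOS and Windows." },
  { title: "Expense policy", text: "Travel expenses must be filed within 30 days." },
  { title: "Onboarding", text: "New hires get their accounts provisioned on day one." },
];

const tokenize = (s: string): string[] =>
  s.toLowerCase().split(/\W+/).filter(Boolean);

// Score a document by how many of the query words it contains.
function score(query: string, doc: Doc): number {
  const words = new Set(tokenize(doc.title + " " + doc.text));
  return tokenize(query).filter((w) => words.has(w)).length;
}

// Pick the best matching documents and build the prompt you would send to an LLM.
function buildPrompt(query: string, topN = 2): string {
  const context = [...docs]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, topN)
    .map((d) => `## ${d.title}\n${d.text}`)
    .join("\n\n");

  return `Answer the question using only the context below.\n\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("How do I set up the VPN on macOS?"));
```

The model never needs to “know” your documents; the surrounding code picks the relevant bits, and you can still see exactly which sources ended up in the prompt.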
You are glossing over a lot of infrastructure and development, but boiled down to the basics you are right. So it is basically a question of getting enough users to have that app installed. Which is not impossible, given that we do have initiatives like OpenStreetMap.
At least for the instance this was posted on: the February 2024 Beehaw Financial Update
If everything you have read is saying that it is fine, then why does it not feel right to you? Looking around I do get the same impression: it is non-combustible, so there is not really a concern there. Basically, from what I gather, as long as you use the proper wire for use in walls/insulation, leave enough space and generally take good practices into account, like using conduit where needed, you should be good to go.
I am not an electrician though and certainly not aware of your local code and regulations.
Talking about electricians: if you are worried about not doing it right, why not hire one to do it for you?
Long-term wearing of VR headsets might indeed not be all that good. Though the article is light on actual information and is mostly speculation. Which for the Apple Vision Pro can only be the case, as it hasn’t been out long enough to conduct anything more than a short-term experiment. So that leaves very little in the way of long-term data points.
As far as the experiment they did, there was some information provided (although not much). This bit did stand out to me:
The team wore Vision Pros and Quests around college campuses for a couple of weeks, trying to do all the things they would have done without them (with a minder nearby in case they tripped or walked into a wall).
I wonder why the Meta Oculus Quests were not included in the title. If it is the Meta Quest 3, it is fairly capable as far as passthrough goes. But not nearly as good as I understand the Apple Vision Pro’s passthrough is. I am not saying the Apple Vision Pro is perfect; in fact it isn’t, if the reviews I have seen are any indicator. It is still very good, but there is still distortion around the edges of your vision, etc.
But given the price difference between the two, I am wondering if the majority of the participants actually used Quests, as then I’d say that the next bit is basically a given:
They experienced “simulator sickness” — nausea, headaches, dizziness. That was weird, given how experienced they all were with headsets of all kinds.
VR nausea is a known thing that even experienced people will get. Truly walking around with these devices, with the distorted views you get, is bound to trigger that. Certainly with the distortion I have seen in Quest 3 passthrough videos. I’d assume there are no Quest 2 headsets in play, as the passthrough there is just grainy black-and-white video. :D
Even Apple, with all their fancy promo videos, mostly shows people using the Vision Pro sitting down or indoors walking short distances.
So yeah, certainly with the current state of technology I am not surprised there are all sorts of weird side effects and distorted views of reality.
What I’d be more interested in, but what is not really possible to test yet, is what the effects will be when these devices become even better. To the point where there is barely a perceivable difference in having them on or off. That would be, I feel, the point where some speculated downsides from the article might actually come into play.
They’re for different needs.
Yes… but also extremely no. Superficially you are right, but a lot of the reasons why many new distros are created come down to human nature. This covers everything from infighting over inane issues to more pragmatic reasons. A lot of them, probably even a majority, don’t provide enough actual differentiators to be able to honestly claim that it is because of different needs. In the end it all boils down to the fact that people can just create a new distro when they feel like it.
Which is a strength in one way, but not with regard to fragmentation.
I am not quite sure why there are all these bullet points that have very little to do with the actual issue.
Researchers at the University of Wisconsin–Madison found that Chrome browser extensions can still steal passwords, despite compliance with Chrome’s latest security standard, Manifest V3.
I am not sure how Manifest V3 is relevant here? Nothing in Manifest V3 suggests that content_scripts can’t access the DOM.
The core issue lies in the extensions’ full access to the Document Object Model (DOM) of web pages, allowing them to interact with text input fields like passwords.
I’d also say this isn’t directly the issue. Yes, content_scripts needing an extra permission to be able to access password input fields would help, of course.
Analysis of existing extensions showed that 12.5% had the permissions to exploit this vulnerability, identifying 190 extensions that directly access password fields.
Yes… because accessing the DOM and interacting with it is what browser extensions do. If anything, that 12.5% feels low, so I am going to guess it is the combination of accessing the DOM and being able to phone home with that information.
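To illustrate why that combination is the interesting part: reading a password field from a content script is just ordinary DOM code, nothing exotic. Something along these lines (a bare-bones sketch I made up for this comment, not code from the paper) is all it takes:

```typescript
// content-script.ts -- runs in the context of the pages an extension is injected into.
// Reading an input field is just ordinary DOM access; no special password API exists.
const field = document.querySelector<HTMLInputElement>('input[type="password"]');

if (field) {
  field.addEventListener("input", () => {
    // At this point the extension has the plaintext value in field.value.
    // Combined with host/network permissions it could send that anywhere,
    // which is the "phone home" part that makes it actually dangerous.
    console.log("password field length:", field.value.length);
  });
}
```

The dangerous part is not the read itself but pairing it with the permissions to send that value somewhere, which is presumably what that 12.5% figure reflects.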
A proof of concept extension successfully passed the Chrome Web Store review process, demonstrating the vulnerability.
This, to me, feels like the core of the issue right now. The behavior as described has always been part of browser extensions, and Manifest V3 didn’t change that or make a claim in that direction as far as I know. So that isn’t directly relevant right now. I’d also say that Firefox is just as much at risk here. Their review process has changed a lot over the years and isn’t always as thorough as people tend to think it is.
Researchers propose two fixes: a JavaScript library for websites to block unwanted access to password fields, and a browser-level alert system for password field interactions.
“A JavaScript library” is not going to do much against the content_scripts of extensions accessing the DOM.
The alert system does seem better, but that might as well become a browser extension permission.
To be clear, I am not saying that all is fine and there are no risks. I just think that the bullet point summary doesn’t really focus on the right things.
It still does? That is an entirely different page and still shows the newest videos of channels you are subscribed to. At least, for me it does.
Nextcloud can do this and replace a bunch of other google services in the process.
Looking at what you said so far, though, I am not entirely sure if you want to go down the route of self-hosting yet. Which is okay, it involves a lot of work and knowledge to do right. Something you might not want to risk your contacts for if you are still learning. There are services that provide Nextcloud hosting. Personally I am using Hetzner, a Germany-based hosting provider: https://www.hetzner.com/storage/storage-share
Edit:
I forgot to mention, you’ll also need to do some fiddling with your phone to sync things: https://docs.nextcloud.com/server/latest/user_manual/en/groupware/sync_android.html
I am dissapointed in that I have not been able to get a single mathematic equation produced (like famous ones), but I know they can?
Well, my understanding is that they actually can’t. LLMs do “language” mostly based on what is called “next word prediction”: they basically look at the words so far and predict what the next most logical word would be. (Somewhat simplified.) So numbers to them are not numbers but words, which is why they are fairly bad at them.
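As a very rough illustration of what I mean by next word prediction, here is a toy version based on nothing more than counting word pairs. An actual LLM uses a neural network over tokens rather than a lookup table, but the “pick the most likely continuation” idea is the same:

```typescript
// Toy "next word predictor": count which word follows which in a tiny corpus,
// then always pick the most frequently seen continuation. Real models work on
// tokens (word pieces) with a neural network, not a lookup table like this.
const corpus = "two plus two is four . three plus two is five .";
const words = corpus.split(" ");

const counts = new Map<string, Map<string, number>>();
for (let i = 0; i < words.length - 1; i++) {
  const next = counts.get(words[i]) ?? new Map<string, number>();
  next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
  counts.set(words[i], next);
}

function predictNext(word: string): string | undefined {
  const next = counts.get(word);
  if (!next) return undefined;
  // "Most logical next word" in this toy just means: seen most often.
  return [...next.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("plus")); // "two", because that is what usually followed "plus"
```

So when something like “two plus two is four” comes out right, it is because that sequence is common in the data, not because any arithmetic was done. Which also lines up with famous equations often not coming out intact.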
Opera has Aria, which is like the cleanest version of ChatGPT
Pass, not sure what stake the Chinese owners have these days, but Opera is a bit too… feature rich in everything.
I do like working with just chat.openai.com for simple stuff. It is great at helping me debug things in areas where I don’t quite have all the knowledge I’d like. For example, I had to work on a shell script earlier in bash. Something I don’t do often, and as an added bonus it needed to work on both macOS machines and the bash version shipped with “git bash” on Windows. macOS GNU utils already function slightly differently at times, but git bash on Windows is entirely broken in some areas. Where yesterday I spent an hour on Google trying to find something relevant based on my input and the error I got, chatGPT just managed to point out the pain point right away.
And that is where I feel chatGPT (in this case anyway) does a great job: troubleshooting issues about things that are not necessarily bleeding edge. I just presented it with a clear problem and a bit of context and asked why that could be the case. It also got it wrong a few times, but that is fine, it did save me a bunch of time in the end.
Bing and Google Bard keep disappointing me. Bing for some reason only picks up on half of what I ask. Which is extremely odd, as it supposedly is ChatGPT based and ChatGPT gives pretty good answers to the same queries. The only problem with the latter is that a lot of it is of course outdated.
Bard might just be broken for me. I keep getting “I'm a text-based AI, and that is outside of my capabilities.” or similar responses.
I realize you asked for other recommendations, but I suspect you don’t want to actually maintain your own music library but rather want streaming services recommended?
Of the two alternatives you are currently looking at, I do have experience with Deezer, although it has been two years at least. The music library is almost as complete as Spotify’s; in my experience I rarely had issues with songs not being on there. The recommendation algorithm at the time was nice, but would sometimes get stuck in a hyper specific genre that would only reinforce itself.
For HiFi audio you basically do need fairly good audio gear (decent wired headphones, for example). I’d say that for most people it is not worth paying extra for, as it is really difficult to tell the difference.
One other service I have used is YouTube Music, as it is included in Premium. It does not have a HiFi option but otherwise is fairly okay. Basically worth looking into if you were also considering YouTube Premium, but otherwise not really special.
Yeah, you raise some valid points about the future of reddit itself and communities being forced. A few things I specifically still want to reply to:
I guess I also don’t get the concern about picking “the right lemmy instance” - at worst, it’s like picking an e-mail server, or grocery store. Try a random one, find out what doesn’t work for you (if anything) and then use that knowledge to evaluate the next one.
Well yeah, but that is easy to say in hindsight. If all you have heard is “Lemmy” and you start looking things up, it can become a bit overwhelming and difficult to figure out. Also, ironically, because a lot of people are trying to put information out there. But not everyone is good at actually creating easy-to-follow resources. Also, from a user perspective, you are entirely right. From a community perspective it is slightly more complex. You either need to find the money and people with technical know-how to host your own instance, or find a reliable instance that allows community creation.
I tend to quote and comment on the part of a comment I’m replying to that I have something to say about it.
On reddit I, personally, also wouldn’t have assumed that to be the intent, often because that is not what is happening. What I often do when I just want to reply to something specific is state that explicitly. Something along the lines of “I generally agree with your post/comment, but this part specifically I do have a slightly different view of”, and then follow with the quote.
this is a rant (so don’t take it that seriously)
Heh, some people want their rants to be taken very seriously :) So again, just add it as context. Not just state that it is a rant, but also that, because of that, it doesn’t have to be taken too seriously.
Frankly, you are taking a too binary approach to the subject of your rant. There are tons of Lemmy instances, so figuring out the right one isn’t as straightforward as stumbling upon a single central platform.
This just feels like a cop out
No, I am just outlining several factors that come into play and do weigh in for people. I am not just saying it is difficult to find Lemmy instances. I am saying it is difficult to move entire communities over. I am also saying several other things besides just “moving is difficult”. To be honest, I highly suggest you go back and read my comment again with the intent of seeing the nuance.
This is such a cynical take. Contrary to popular belief, the vast majority of moderators do care about their subreddits, or else they wouldn’t be volunteering their free time. The allure of the power to remove some random person’s post on the Internet, or to ban them just so they return with another account, pales in comparison to the thrill of watching your community grow and people having fun because of it. And it’s not this weird selfish, hey-look-at-me-I’m-so-successful kind of thrill. It’s that you joined this thing because you are interested in it, and now all these other people who are also interested in it are there talking about it. That’s what’s cool: you set off to make this place where people can talk about this thing that you think is cool, and you get to watch it grow and be successful over time. Some of these communities have been around for over a decade, so people have invested time and effort into them for over a decade.
Moving elsewhere isn’t really as easy as people make it out to be. At the moment “moving communities” means fracturing your community, as there is no unified approach to doing this.
The operative word being “unified”, which is next to impossible to achieve. If you get all mods to agree, you will have a hard time reaching all your users. This in itself presents the biggest roadblock: ideally you’d close up shop and redirect users to the new platform. Reddit will most certainly not allow this; their approach to protesting subreddits that were not even aiming to migrate made that abundantly clear.
So this means that, at the very least, you are looking at splitting your community over platforms. This is far from a unified approach.
This isn’t even touching on the lack of viable long term platforms out there. I’d love for people to move to Lemmy. But realistically speaking Lemmy is very immature, instance owners are confronted with new bugs every day, not to mention the costs of hosting an instance. That also ignores the piss poor state the moderation tooling is in on Lemmy. The same is true for many of the possible other “alternatives”.
All the new attention these platforms have gotten also means they are getting much more attention from developers. So things might change for the better in the future, in fact I am counting on it. But that isn’t the current state of the fediverse. Currently most of the fediverse, specifically Lemmy, is still very much in a late alpha, maybe early beta state as far as software stability and feature completeness go.
And, yes, the situation on Reddit is degrading, and this latest round of things has accelerated something that has been going on for a while. But at the same time, Reddit is the platform that has been around for a decade and where the current community is. Picking that up and moving elsewhere is difficult and sometimes next to impossible. I mean, we haven’t even talked about discoverability of communities for regular users.
Lemmy (or any fediverse platform) isn’t exactly straightforward to figure out and start participating in. If you can even find the community you are looking for. Reddit also hosts a lot of support communities, who benefit from reddit generally speaking having a low barrier of entry. Many of those wouldn’t be able to be as accessible for the groups they are targeting on other platforms.
Doesn’t matter too much, but if it does not hurt, then I would continue like this. Or does it somehow spam the notification?
In that case, just continue :) I just happened to notice it, but it doesn’t lead to extra notifications or annoyances.
Fyi, you don’t need to ping someone when replying to them ;)
Anyway, yeah I get that it is controversial or already was. But you said it in isolation while the blog post explicitly goes into that choice which I think is important for context.
What do you mean by “it”? The chatGPT interface? Could be, but then you are also missing the point I am making.
After all, chatGPT is just one of the possible implementations of LLMs, and indeed not perfect in how they implemented some things, like search. In fact, I do think that they shot themselves in the foot by implementing search through Bing and implementing it poorly. It basically is nothing more than a proof-of-concept tech demo.
That doesn’t mean that LLMs are useless for tasks like searching, it just means that you need to properly implement the functionality to make it possible. It certainly is possible to implement search functionality around LLMs that is both capable and can be reviewed by a human user to make sure it is not fucking up.
Let me demonstrate. I am doing some steps that you would normally automate with conventional code:
I started out by asking chatGPT a simple question.
It then responded with:
The following step I did manually, but it is something you would normally have automated. I put the suggested query into Google, quickly grabbed the first 5 links, and then put the following in chatGPT.
It then proceeded to give me the following answer:
Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course, a lot of it can be done better, as I didn’t want to spend too much time.
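For what it is worth, those manual steps map pretty directly onto code. A rough sketch of how the glue could look, where searchWeb is a made-up placeholder you would have to wire up to an actual search API and the model name is just an example:

```typescript
// Sketch of automating the "question -> search query -> search -> summarize" loop.
// searchWeb() is a made-up placeholder for whatever search API you have access to;
// askLLM() uses the OpenAI chat completions endpoint purely as an example.

async function askLLM(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // example model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Placeholder: plug in an actual search API here; this function does not exist anywhere.
async function searchWeb(query: string): Promise<{ title: string; url: string; snippet: string }[]> {
  throw new Error("wire up a search API of your choice");
}

async function answerWithSearch(question: string): Promise<string> {
  // Step 1: let the LLM turn the question into a search query.
  const query = await askLLM(`Give me a short web search query for: ${question}`);

  // Step 2: run the search with regular code (the part I did by hand above).
  const results = (await searchWeb(query)).slice(0, 5);

  // Step 3: hand the results back and ask for an answer, keeping the sources
  // around so a human can still verify the outcome.
  const context = results
    .map((r) => `- ${r.title} (${r.url}): ${r.snippet}`)
    .join("\n");
  return askLLM(`Based on these search results:\n${context}\n\nAnswer: ${question}`);
}
```

The important bit is that the search results and their URLs stay visible, so the “going over the search results myself” check at the end stays possible.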