PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.

No, I’m not interested in voting for your candidate.

  • 1 Post
  • 80 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org to Memes@lemmy.ml · AI bros

    “Gradient descent” ≈ on a “hilly” (mathematical) surface, try to find the lowest point by repeatedly stepping downhill from an initial guess. The “gradient” is basically the steepness: the rate at which the thing you’re trying to optimize changes as you move through “space”. The gradient points uphill, so it tells you mathematically which direction to step (the opposite way) to reach the bottom. “Descent” means “try to find the minimum”.

    I’m glossing over a lot of details, particularly what a “surface” actually means in the high-dimensional spaces that AI uses, but a lot of problems in mathematical optimization are solved like this. And one of the steps in training an AI agent is to do an optimization, which often uses a gradient descent algorithm. That being said, not every process that uses gradient descent is necessarily AI or even machine learning. I’m actually taking a course this semester where a bunch of my professor’s research is in optimization algorithms that don’t use gradient descent at all!
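
    To make that concrete, here’s the whole idea as a tiny Python sketch; the toy surface f(x), the starting guess, and the step size are all made up for illustration:

    ```python
    # Find the low point of a toy "hilly" surface by stepping downhill.
    def f(x):
        return (x - 3) ** 2 + 1  # a simple bowl whose bottom is at x = 3

    def gradient(x):
        return 2 * (x - 3)  # derivative of f: the "steepness" at x

    x = 0.0          # initial guess
    step_size = 0.1  # how far to move per iteration

    for _ in range(100):
        x -= step_size * gradient(x)  # step against the gradient, i.e. downhill

    print(x)  # ~3.0, the minimum
    ```

    Real optimizers work in millions of dimensions instead of one, but the loop has the same shape.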


  • They created a good product so people used it and there were no alternatives when it got shit.

    They created an inherently centralizing implementation of a video sharing platform. Even if it was done with good intentions (which it wasn’t; it was some capitalist’s hustle, and its social importance is a side effect), we should basically always condemn centralizing implementations of a given technology, because they reinforce existing power structures regardless of the intentions of their creators.

    It’s their fault because they’re a corporation that does what corporations do. Even when corporations try to do right by the world (which is an extremely generous appraisal of YouTube’s existence), they still manage to create centralizing technologies that ultimately serve to reinforce their existing power, because that’s all they can do. Otherwise, they would have set themselves up as a non-profit or some other type of organization. I refuse to accept the notion of a good corporation.

    There’s no lock in. They don’t force you off the platform if you post elsewhere (like twitch did).

    That’s a good point, but while there isn’t a de jure lock-in for creators, there is a de facto lock-in that prevents them from migrating elsewhere. Namely, YouTube is a centralized, proprietary service that can’t be accessed from other services, so leaving it means leaving your audience behind.


  • Russian bots aren’t all that bad

    Yes, in two senses:

    1. I don’t lose sleep at night knowing that these bots exist (or those of any other government). They shouldn’t exist for the simple reason that public institutions shouldn’t be in the business of deceiving people, but unfortunately, deceiving the public is a bunch of what the State actually fucking does “for” “us”. I especially don’t think the Russian government cares to run bots/trolls on our little corner of the internet when bigger targets exist.
    2. Vacuously, I don’t disagree with literally everything that the Russian bots say because they can be found saying just about anything.

    I cannot stress enough that I do NOT approve of state-sponsored botting or trolling of public spaces in general. However, when you see Pro-Russian or Pro-whatever opinions on the Internet, you are probably reading the words of a “useful idiot” or non-State troll.

    This reality is a lot scarier than if the opinions were all just from some Russian troll farm, because now we have to interrogate the reality that these people have different and complex reasons for why they ended up with those opinions. It means that the task of persuasion is a lot more complicated than just shielding people from bots and trolls.


  • if you find yourself on the same side as Russian bots and don’t find it so disturbing that you immediately change your position

    As mentioned by another commenter, the actual strategy of the real Russian government is to sow division by advocating a bunch of positions, so a particular position being presented by Russian trolls absolutely does not warrant immediately changing my position. Your position is not special in that regard.

    But more generally, I’m not going to change my position on anything solely because someone awful agrees with it.

    And even more generally, I don’t care about unifying people under the political agenda of any existing government or political party. I want to see people unified about organizing themselves. To that end, letting one of the existing political parties, including yours, dictate our political will to us goes against the goal of people organizing themselves.

    you can’t claim any high moral ground and use that to lecture other people.

    I do not claim nor need the moral high ground to present my opinions. Same goes for everyone else.


  • A deep neural adaptive PID controller would be a bit overkill for a simple robot arm, but for say a flexible-link robot arm it could prove useful. They can also work as part of the controller for systems governed by partial differential equations, like in fluid dynamics. They’re also great for system identification, the results of which might indicate that the ultimate controller should be some “boring” algorithm.
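
    For a concrete flavor of the system-identification use, here’s a minimal sketch assuming PyTorch; the “plant” and all of its numbers are invented for illustration, standing in for logged input/output data from real hardware:

    ```python
    # Sketch: neural-network system identification of an unknown plant.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Pretend plant: next output depends nonlinearly on past output and input.
    def plant(y_prev, u_prev):
        return 0.8 * y_prev + 0.5 * torch.tanh(u_prev)

    # Collect "experimental" data by driving the plant with random inputs.
    u = torch.randn(1000)
    y = torch.zeros(1001)
    for k in range(1000):
        y[k + 1] = plant(y[k], u[k])

    X = torch.stack([y[:-1], u], dim=1)  # features: (y[k], u[k])
    Y = y[1:].unsqueeze(1)               # target: y[k+1]

    # Small MLP standing in for the unknown dynamics.
    model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    for epoch in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), Y)
        loss.backward()  # this training loop is itself gradient descent
        opt.step()

    print(loss.item())  # small residual => the net "identified" the dynamics
    ```

    If the residual is small, the net is a usable surrogate model of the plant, and inspecting it can tell you whether some “boring” controller would do the job.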


  • Since I don’t feel like arguing

    I’ll try to keep this short then.

    How will these reasonable AI tools emerge out of this under capitalism?

    How does any technology ever see use outside of oppressive structures? By understanding it and putting it to work on liberatory goals.

    I think that crucial to working with AI is that, as it stands, the need for expensive hardware to train it makes it currently a centralizing technology. However, there are things we can do to combat that. For example, the AI Horde offers distributed computing for AI applications.

    And how is it not all still just theft with extra steps that is immoral to use?

    We gotta find datasets that are ethically collected. As a practitioner, that means not using data for training unless you are certain it wasn’t stolen. To be completely honest, I am quite skeptical of the ethics of the datasets that the popular AI products were trained on. Hence why I refuse to use those products.

    Personally, I’m a lot more interested in the applications to robotics and industrial automation than generating anime tiddies and building chat bots. Like I’m not looking to convince you that these tools are “intelligent”, merely useful. In a similar vein, PID controllers are not “smart” at all, but they are the backbone of industrial automation. (Actually, a proven use for “AI” algorithms is to make an adaptive PID controller so that it can respond to changes in the plant over time.)
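
    To show just how un-smart that backbone is, here’s a bare-bones PID loop in Python; the gains and the toy “thermal” plant are made up for illustration (the adaptive version would just add some mechanism that retunes kp/ki/kd online):

    ```python
    # A complete PID controller: proportional + integral + derivative terms.
    def pid_controller(kp, ki, kd, dt):
        integral = 0.0
        prev_error = 0.0
        def step(setpoint, measurement):
            nonlocal integral, prev_error
            error = setpoint - measurement
            integral += error * dt
            derivative = (error - prev_error) / dt
            prev_error = error
            return kp * error + ki * integral + kd * derivative
        return step

    # Toy first-order plant: temperature drifts toward the applied input.
    temp = 20.0
    controller = pid_controller(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    for _ in range(200):
        u = controller(setpoint=100.0, measurement=temp)
        temp += 0.1 * (u - 0.05 * temp)  # crude plant dynamics

    print(temp)  # settles near the 100.0 setpoint
    ```

    Three tuned gains and a dozen lines of code; that’s the “backbone of industrial automation”.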


  • Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

    We definitely don’t need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability relative to other approaches (if limited compared to humans) to generalize to “nearby but different enough” tasks. And once they’re trained (and possibly quantized; see the sketch below), they (LLMs and reinforcement learning policies) don’t require that much more power to run than traditional algorithms. So IMO, the question should be “is it worthwhile to spend the energy to train X thing?” Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.
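
    On the “possibly quantized” point, here’s a minimal sketch of post-training dynamic quantization, assuming PyTorch; the toy model is invented, and in practice you’d apply this to whatever network you actually trained:

    ```python
    # Shrink a trained model's Linear layers to 8-bit integers so
    # inference is cheaper in memory and compute.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # The quantized copy runs the same forward pass, just cheaper.
    x = torch.randn(1, 128)
    print(quantized(x).shape)  # torch.Size([1, 10])
    ```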

    For a person without access to big computing resources (me lol), there’s also the fact that transfer learning is possible for both LLMs and reinforcement learning. Easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on “normal” computers with FOSS tools.
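
    As a sketch of what that looks like in practice (assuming PyTorch and torchvision; the backbone choice and the 10-class head are arbitrary examples):

    ```python
    # Transfer learning: reuse a pretrained network, fine-tune a new head.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a backbone someone else spent the big compute on.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained weights so we don't retrain them from scratch.
    for param in model.parameters():
        param.requires_grad = False

    # Swap in a fresh head for the new task (say, 10 classes instead of 1000).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the head's parameters get updated, which a laptop can handle.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    # Training is then the usual recipe, over far fewer trainable weights:
    #   for images, labels in dataloader:
    #       loss = nn.functional.cross_entropy(model(images), labels)
    #       ...
    ```

    The expensive pretrained backbone is reused as-is; only the little head gets trained, which is the part a “normal” computer with FOSS tools can handle.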

    all it does is remix a huge field of data without even knowing what that data functionally says.

    IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.

    IMO I’m waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists’ newest plaything.

    Sorry about the essay, but I really do think AI tools have huge potential to make life better for us all, and an obviously much greater potential for capitalists to destroy us all, so long as we don’t understand these tools and turn them against the powerful.