Modern superstitions

There has been some talk recently about how the enthusiasm for large language models (LLMs) as a proxy for AI (Artificial Intelligence)* resembles a religion or superstition.

In the sense that people already put way too much faith in the veracity of replies which are often factually wrong, and that this in turn is having an impact on the body of knowledge (for example, wrong information is polluting Wikipedia, which then pollutes the information the LLMs train on, which then further pollutes other things like newspapers and books, which are then used as sources on Wikipedia... and the cycle continues) and possibly even, eventually, the way we think, to the extent that we can no longer distinguish truth from lies.

Also see how the richest are stumbling over each other to give financial tributes to the technology.

On the other hand, isn't this what humans always do? Isn't technology almost always praised in a euphoric way by the rich? And isn't a ridiculous amount of effort put into the "Thing"?

From Stonehenge to ChatGPT, people have been giving spiritual worthiness to inanimate objects. Or is there really something more dangerous and more sinister about this?


* Nobody sensible really thinks it is intelligence, as the models simply reply within a statistical model of what a "good reply" would look like.
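(As a rough illustration of that point, and not a description of any particular model: the core of an LLM's reply is repeatedly sampling a statistically likely "next word". The toy sketch below uses a made-up vocabulary and made-up probabilities standing in for a trained network.)

    # Toy illustration only: a hand-written probability table standing in for a
    # trained model. A real LLM computes these probabilities with a neural
    # network over tens of thousands of tokens, but the sampling idea is the same.
    import random

    # Hypothetical next-token probabilities after the prompt "The capital of France is"
    next_token_probs = {
        "Paris": 0.90,   # the most "good-reply-looking" continuation
        "Lyon": 0.05,
        "a": 0.03,
        "London": 0.02,  # wrong, but still assigned some probability
    }

    def sample_next_token(probs):
        """Pick one token at random, weighted by its probability."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))

The point being that "Paris" comes out most of the time not because anything in the box knows what France is, but because that word is the most statistically plausible-looking continuation - and occasionally the dice land on "London".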

Comments

  • KoF wrote: »
    Also see how the richest are stumbling over each other to give financial tributes to the technology.

    Naturally, it's the latest bezzle from which they hope to make or - in the case of the tech giants - perpetuate their fortune. It doesn't have to have a particularly religious impulse.
  • @KoF said,
    From Stonehenge to ChatGPT, people have been giving spiritual worthiness to inanimate objects.

    Or, depending on one's view of various inanimate objects considered to have spiritual essence, perceiving spiritual essence in them. (I'm not quite sure what spiritual worthiness would be, myself.) Certainly blessed objects are considered to have something special about them, not even getting into all manner of beliefs about other supernatural or spiritual essences attached to things all over the world and down through time.

    (Since it's prehistoric, I'm not sure how much we can know about whether Stonehenge was considered to have spiritual essence beyond what the stones would already have, but I'd consider it more likely than ChatGPT.)

    I do think that generative AI like ChatGPT (and its visual equivalents) is very dangerous for all sorts of reasons (not least of which include replacing people who make art of all kinds, but also spreading falsehoods and the like).
  • I meant "spiritual worthiness" in the sense that people invest a lot (in terms of money and time) into an object that then gains importance beyond the natural qualities of the object.

    To be fair, I'm not sure it has ever happened before for something intangible; prior beliefs about the internet seemed to be related to hope in its potential for easy access to information.

    I don't have any notion of "blessed objects" but would think that everyone who does accepts that there are things other people regard as blessed that they themselves don't.
  • KoF wrote: »
    To be fair, I'm not sure it has ever happened before for something intangible; prior beliefs about the internet seemed to be related to hope

    How are you differentiating this from a lot of what normally gets classed as investment?
  • ChastMastr wrote: »
    I do think that generative AI like ChatGPT (and its visual equivalents) is very dangerous for all sorts of reasons (not least of which include replacing people who make art of all kinds, but also spreading falsehoods and the like).

    Or does it empower people who couldn't previously create art to create it? If I have an idea for a piece of art but don't have the artistic talent to create it myself, and typing at an AI model allows me to bring my vision to reality, is that a bad thing?
  • It is if it deprives a creative person of work, particularly if they have sweated long and hard to develop their craft.
  • chrisstiles Hell Host
    edited October 2024
    ChastMastr wrote: »
    I do think that generative AI like ChatGPT (and its visual equivalents) is very dangerous for all sorts of reasons (not least of which include replacing people who make art of all kinds, but also spreading falsehoods and the like).

    Or does it empower people who couldn't previously create art to create it? If I have an idea for a piece of art but don't have the artistic talent to create it myself, and typing at an AI model allows me to bring my vision to reality, is that a bad thing?

    It is very unlikely to reproduce your actual vision - what it's actually doing is taking your description of your vision and then trying to re-create it as a simulacrum from bits of art it has stolen from other people.
  • The stealing is the problem—and I don’t see any possibility of getting that particular genie back in the bottle at this point.

    Also environmental cost, and as just mentioned, that it doesn’t in fact reproduce your mental image—assuming you have one, and that’s what we were talking about, right? Because if your vision as a would-be artist is so vague that you’ll accept whatever randomness it comes up with, I think we’ve left the realm of art—or at least the point where “you” (meaning the prompt writer) can reasonably claim this is “your” art at all.
  • If AI can translate our ideas into artworks when we have no artistic talent ourselves, I see it as a positive use. That it is using the talents of others to do so is surely no different from someone with talent picking up influences from a visit to an art gallery. Our creativity is a gift from God, there to be shared.

    Painting by numbers gives many people enjoyment; this might be seen as an extension of that.

    If ChatGPT, having been imbued with Dickens and Shakespeare and the books of the Bible, helps us to translate our ideas into literature, I see opportunities for expression and creativity which may have been blocked by our lack of talent in those areas. Someone who has difficulty with spelling or who has a limited vocabulary possesses an imagination too.

    There are already enough people spreading misinformation (with little or no AI input), and enough of an increase in scams, to have begun alerting us to be more cautious about what is or is not the truth, and about whether who we are speaking to is really a human being.

    Education from now on needs to keep up - as ever.

    I see no spiritual or superstitious connection with this.
  • This is a long screed by Dario Amodei, an AI thinker and the man behind the company that runs the LLM called Claude.ai.

    There's a lot to read.

    https://darioamodei.com/machines-of-loving-grace
  • KoF wrote: »
    This is a long screed by Dario Amodei, an AI thinker and the man behind the company that runs the LLM called Claude.ai.

    There's a lot to read.

    https://darioamodei.com/machines-of-loving-grace

    All of that is contingent on his basic assumptions about what the industry might be able to create in the short term (5-10 years), and after giving the sci-fi baggage disclaimer, he defines this as:
    In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • It also smacks of eugenics. It smacks of a lot of things, but the eugenics stood out for me.
  • Anybody who thinks AI is only a few years from writing "extremely good novels" doesn't understand either AI OR novels, it seems to me.
  • Anybody who thinks AI is only a few years from writing "extremely good novels" doesn't understand either AI OR novels, it seems to me.

    Additionally:
    The resources used to train the model can be repurposed to run millions of instances of it

    We could summarize this as a “country of geniuses in a datacenter”.

    Ignoring the 'imagine a wizard and magic' aspect, this is positing that you can keep millions of above-human-level intelligences inside a box and have them work for you; it's some weird form of digital slavery where - strangely - the superintelligences don't have any agency of their own.
  • The story of the 'Sorcerer's Apprentice' ineluctably springs to mind.
    Oh dear!
  • I read a lot more of that article and man, he's basically discussing a ladder to the moon--one which starts a mile off the ground. He hand-waves that bit, expecting we'll get to that point in a reasonable amount of time. He's much more interested in discussing what happens once we get TO the moon...
  • I read a lot more of that article and man, he's basically discussing a ladder to the moon--one which starts a mile off the ground.

    Because it's amusing, here is Randall Munroe (the xkcd guy) discussing the logistics involved in climbing a hypothetical fire pole between the Moon and Earth.

    One of the things that I rarely see discussed is the way a lot of AI and similar computer automations are mechanical Turks, requiring human oversight and correction to function properly. The most obvious example is "self-driving" cars, which require input from the driver in tricky situations. In some cases a human is correcting the car remotely without the knowledge of the occupants.
  • @Crœsos said
    In some cases a human is correcting the car remotely without the knowledge of the occupants.

    Say what?? 😮
  • Crœsos wrote: »
    One of the things that I rarely see discussed is the way a lot of AI and similar computer automations are mechanical Turks, requiring human oversight and correction to function properly.

    Another obvious example is the 'Optimus' robots that Tesla recently showcased - which were largely guided by humans when doing anything more complicated than moving around:

    https://arstechnica.com/ai/2024/10/reports-teslas-prototype-optimus-robots-were-controlled-by-humans/
  • Arthur Clarke, a smart guy, wrote a novel about the idea of an elevator up to a satellite. It required having super-strong materials and finding a location on Earth (on land) directly below the satellite (this is nearly impossible).
  • Baptist Trainfan Shipmate
    edited October 2024
    Presumably a geo-stationary satellite? There would be a lot of steps to climb if the elevator broke down ...
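    (For a sense of how many steps: the altitude of a geostationary orbit follows from Kepler's third law, and the rough back-of-the-envelope calculation below - using only standard textbook constants - puts the cable at about 36,000 km, which is a lot of stairs.)

        # Back-of-the-envelope: altitude of a geostationary orbit from Kepler's third law.
        import math

        GM_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
        SIDEREAL_DAY = 86164.0    # one rotation of the Earth, in seconds
        EARTH_RADIUS = 6.378e6    # equatorial radius, in metres

        # Orbital radius where the orbital period matches Earth's rotation:
        # r^3 = GM * T^2 / (4 * pi^2)
        r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)

        print(f"orbital radius: {r / 1000:,.0f} km")                   # ~42,164 km from Earth's centre
        print(f"cable length:   {(r - EARTH_RADIUS) / 1000:,.0f} km")  # ~35,786 km of climbing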
  • Caissa Shipmate
    Here's the wiki article on space elevators. https://en.wikipedia.org/wiki/Space_elevator
  • As Clarke points out, the satellite would actually be a bit farther out than if it did not have to support the weight of the cable.