I think the rollout for Alexa+ is not instantaneous. They are allowing the later models of their Echo platform to acquire it first. I think I saw that as of May there were 1,000,000 users. I have an earlier version of Echo, so I have yet to get it. My Pixel smartphone has it, though.
Hmmmm .... how about feeding AI that infamous verse from Psalm 137. Scottish metrical version:
'Oh blessed may that trooper be,
When riding on his naggie,
Takes their wee bairns by t' toes
And dings them on the craggie'.
@RockyRoger I’ve seen that quoted a few times, but never found it in any source. I can see how it echoes the psalm, but I suspect it has never had any official status.
The quote is in C S Lewis's 'Reflections on the Psalms'. I don't think he says where he got it from. But it's a gem, you will agree.
Hmm. I can’t track it down in my edition of Reflections. Do you know what chapter it is in?
It's not. Using the wonders of technology and ebooks, which people are grumbling about, you can do a search and see for definite that it's not there. The text is available free on Faded Page online. Here, click on the HTML version and search for naggie or toes or any other word from the quote.
Oh dear! Sorry for my misremembering. But I'm pretty sure I did read it in Lewis. Trouble is, having read practically all of Lewis, it could be anywhere. 'Letters to Malcolm'? One of his talks or articles? The search continues .... I wonder if AI can help?
People used to say GIGO: garbage in, garbage out. Got to ask good questions.
It is an idiot about some things that aren't part of the main culture. It doesn't know much about where my people are from and our ways. They got us using Duck.ai, which is GPT rather than the brand-name ChatGPT. The ChatGPT one tracks you bad and profiles you. We get profiled already as brown people, so we don't want that.
I noticed at a state school the education department had their own AI engine. It gave a partial answer, but then followed it up with something like "You may want to research further by..."
The kids circumvented this by posting its suggested research back in where it gave its answer and then suggested further research!
A *very* brief online search suggests that the quote's from Letters to an American Lady as:
Thank you for your most kind and encouraging letter.
Old Scottish version of Psalm 137:8-
O blessed may that trooper be
Who, riding on his naggie,
Wull tak thy wee bairns by the taes
And ding them on the craggie.
A completely offline search of the physical book reveals that it's the postscript to a letter Lewis wrote to Mary (Sept. 30 1958) commiserating with her about a trip to a dentist to have a tooth extracted. In his letters - written to individuals rather than for a wider audience - Lewis unsurprisingly seems rather more concerned with the well-being of the individual to whom he's writing than the provenance of his quotes. (In the copy of Letters I used, it seems he misattributed it as Psalm 136.)
This puts me in mind of the contrast between the non-digital experience of looking things up in books and writing letters to people we never meet, and the digital experience of doing these things online.
And thinking about the frameworks (regulatory and others) which shape these experiences:
A US judge has ruled that Anthropic, maker of the Claude chatbot, did not breach copyright law by using books – without the authors' permission – to train its artificial intelligence system. Judge William Alsup compared the Anthropic model's use of books to a "reader aspiring to be a writer."
To me, this looks like a legal suggestion (if not a legal opinion) that AI is in some way like a human. It strikes me that the perceived quality of being human-like is potentially quite significant, both to the (regulatory) future of AI, but also more immediately to the way that people use it, being the subject of this thread.
Thinking about the question of the OP from a different angle, I've been asking myself whether there are any tasks or activities that I currently avoid, that I might be more encouraged to attempt if I had a human assistant with whom I could discuss them or who could talk me through them.
In short, the purpose and character of using copyrighted works to train LLMs [large language models] to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.
I think the court’s references to “training” and “process,” especially read in the larger context of the overall opinion, make it clear that the court is talking about how programmers used copyrighted works to “train” the chatbot, not about how the chatbot itself may “think” or otherwise act like a human. The defendant in the case is, after all, Anthropic, the company that developed the chatbot, not the chatbot itself.
Thinking about the question of the OP from a different angle, I've been asking myself whether there are any tasks or activities that I currently avoid, that I might be more encouraged to attempt if I had a human assistant with whom I could discuss them or who could talk me through them.
Human help in this area has been very helpful to me in the past. But for me, sporadic and intermittent, and also something I'm mindful I don't want to take advantage of friends with. Of course, if you were able to pay an assistant to do it, that would be different. I've found journalling can help too, getting things down on paper, to look at and sort out visually, but this also requires the consistency of motivation to keep journalling regularly.
So for me, AI is a point in between - unlike journalling, you have the external force of another presence interacting with what you say, which can break through inertia, but the detailed, repetitive sort of one-way support you can use it for is very different from the casual two-way support real life friends might offer each other. And if it's not being helpful, you can tell it quite openly without it taking offence!
On this topic, I spotted this link earlier: https://the.vane.fyi/p/golem-watch-001 re the section "AI assistants are fiction engines", which riffs on the fact that when a prompt asks ChatGPT/Claude/etc to act out a particular role, the parameters of that role often come from fiction.
That section was very concerning, in particular the beginning, with Allyson believing she had "discover[ed] interdimensional communication". And sad in her case, as the article states.
The earlier section on photographic trickery and Photoshop was interesting. I hadn't really considered what similarities and differences they may have with the truths and falsities of AI.
In short, the purpose and character of using copyrighted works to train LLMs [large language models] to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.
I think the court’s references to “training” and “process,” especially read in the larger context of the overall opinion, make it clear that the court is talking about how programmers used copyrighted works to “train” the chatbot, not about how the chatbot itself may “think” or otherwise act like a human. The defendant in the case is, after all, Anthropic, the company that developed the chatbot, not the chatbot itself.
Thanks for the link to the opinion, Nick Tamen. I agree with your general point, but it still seems odd to me to introduce a simile that blurs the boundary between aspiring to write like human authors, and aspiring to create a tool that relies on being able to write like human authors.
That aside, it seems to me likely that, sooner or later, the humanness of AI will become a legal issue. Which is probably for another thread.
My friend ran her rough version of the minutes of a meeting through ChatGPT. She asked it to make them look more professional and use complete sentences. She said it saved a lot of time and turned out better than anything she could have done on her own.
[Having read it:] It appears someone else found it, too. I actually don't have Letters to an American Lady, though I do have his collected letters, so it was probably in there that I read it.