I agree, @Pomona - I was thinking the same. These can be men who see themselves as very chivalrous, but they expect the women to whom they are chivalrous to be quiet and agreeable! They also like to be the ones explaining things to women (all part of the chivalry idea - that they, with their superior understanding of the world and superior intelligence, are graciously teaching women!), so it doesn't sit well with them when a woman knows more about a topic than they do and challenges something they say. Often the tone goes from patronising to aggressive, whereas when they are talking to other men, it's more just chatting as to an equal.
Of course, there are the other men who grumble 'Women want equal rights, so I'm never going to offer up my seat or hold open a door again!' As if those were inherently gender-related things, rather than general courtesy to any gender that needs it.
The whole purpose of chivalry was to teach men not to be jerks to anyone perceived as "weaker," including women. This implies that there's a very strong impulse for men to be jerks to women, and to anyone else so perceived.
If you need to make a big dramatic rule about "being super respectful to those of category X" that would seem to imply a preexisting problem of people being super disrespectful to those of category X.
This might be another reason to choose to be detached. People get attached to a desire to be powerful; they want to be seen to be strong, to impress people with their might or whatever virtues they possess (pun intended if you know Latin), and so they treat conversation like a competition. And it's easy for some folks, particularly women or people who are trained to be deferential, to get overlooked and talked over.
My spouse reports having had this experience often, in many contexts. So, yeah. I do think "shut up and listen" is another monastic virtue that everyone could afford to cultivate, including myself, even as someone who often struggles IRL to figure out how to insert words into a real-time conversation - yes, one reason I tend to type a lot.
Are we back on topic now?
Yes and no. I think I obscured a significant distinction in the metaphor.
There's another sense in which the metaphor can be interpreted: these various forms of medium attenuate and morph the aspects of ourselves that can be communicated. It seems to me that for most people, most of the time, it's this rather than digital fidelity which has the larger effect, with the latter being significant for so-called influencers and for the rarer occasions when some artefact goes viral.
Depending on which technologists you listen to, the digital world they foresee is either destined to remain a simulation of reality (albeit an increasingly sophisticated one) or has the potential to become a new kind of reality.
This new kind of reality is typically distinguished by the attainability of intelligence and consciousness, maybe as emergent properties, maybe through intentional design. The process is usually hazy, but one thing these technologists have in common is a computationalist outlook, in that they believe (or at least behave as though they believe) that the human mind is an information processing system and that cognition and consciousness together are a form of computation. And, less philosophically, that the world can be understood as a computational process, with people as subprocesses.
I'm not quite clear about the connection between what I said and this, and I'm reasonably confident that this doesn't actually represent a real choice (regardless of the fantasies of some of my more excitable coworkers). Even if one were to take a purely materialist stance, we don't really understand intelligence, consciousness etc. at the level required to replicate them.
I'm assuming you don't think it's realistic either, so I'm not sure of the purpose of the speculation?
Our responses thus far illustrate two ways of looking at the issue. From one perspective, the issue is how digital processing, transmission and storage affect aspects of real-world communication between human beings, and how this affects our real-world self-conceptions and interactions. I'm suggesting there's another perspective, in which the cumulative, aggregated effect of all this interconnected digitality can be conceived as the beginnings of a "digital world", which may come to sustain digital beings (native digital beings, transformed humans or hybrids).
I don't know if this is realistic. What I do know is that a significant number of well-connected and well-funded people seem to think it's realistic, and are actively working towards and/or anticipating a point when it becomes "real". Much of this is (ultimately) driven by the desire for profit, and for the visionaries, the desire to understand the unknown and imagine the impossible. These two groups don't traditionally spend a lot of time thinking about the unintended consequences for the rest of us (although people working in other fields do).
One reason for speculating is that technology is changing our world rapidly, and we don't know what's going to happen next. I suggest that the alternative to speculation is thinking about change after it's already started to happen. Google, Amazon and Facebook are examples of how that works out.
By way of a thought experiment, if it were possible to upload your consciousness to such a reality, would you consider it?
I enjoy Greg Egan's books, but that form of substrate-independent continuity of being seems unlikely.
Likelihood and enjoyment aside, I understand that his books explore some of the issues. The points ChastMastr raises in his response look relevant to me (thanks ChastMastr).
Even if we wouldn't want to be part of a digital world except as visitors, our reasoning has a bearing on how we think about this present world and reality, its meaning (if any) and what matters to us. It's also possible to think about a digital world as an alternate version of the world to come, one which encapsulates the desires of a significant number of people, prepared to accept a future without mystery (in preference to a traditional vision of a new heaven and new earth).
And thinking about our moral responsibility regarding digital beings, there are already plenty of ethical issues concerning the digital world as it is right now - automated decision-making (including by AI) is rapidly spreading into many areas traditionally assumed to involve ethical consideration. This includes real-time ethical decisions of life and death, such as those made by AI-operated drones and self-driving cars. Given the amount of progress that *isn't* being made on these questions, it might help to look at them from a different perspective, in a different framing.
Even if it doesn't come to pass, thinking about a theoretical or imaginary digital world and its digital inhabitants is a way of thinking about how digital technology will change the real world and the real people in it.
I'm suggesting there's another perspective, in which the cumulative, aggregated effect of all this interconnected digitality can be conceived as the beginnings of a "digital world", which may come to sustain digital beings (native digital beings, transformed humans or hybrids).
Sure, and the exploration of that other perspective has a long history in forms of fiction and - to an extent - bits of philosophy.
I'm just not sure that the way to prepare ourselves for the future is to listen to the public fantasies of tech billionaires:
https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
I don't know if this is realistic. What I do know is that a significant number of well-connected and well-funded people seem to think it's realistic, and are actively working towards and/or anticipating a point when it becomes "real". Much of this is (ultimately) driven by the desire for profit, and for the visionaries, the desire to understand the unknown and imagine the impossible. These two groups don't traditionally spend a lot of time thinking about the unintended consequences for the rest of us (although people working in other fields do).
One reason for speculating is that technology is changing our world rapidly, and we don't know what's going to happen next. I suggest that the alternative to speculation is thinking about change after it's already started to happen. Google, Amazon and Facebook are examples of how that works out.
But whether this is realistic is a fairly important question, because it's the difference between looking at the PR and claims of company founders vs looking at the plans their companies are actually making. It is the latter that actually has an impact on how the world changes, whereas the former is largely used to dissuade the gullible (including politicians) from looking at the very real impacts those plans are likely to cause.
Just as Star Trek inspired at least one generation of scientists and engineers, fiction (especially science/speculative fiction) continues to inspire technologists. Being familiar with the fiction that inspires Big Tech visionaries, designers and developers can give us an insight into the way they think, and the issues they think (and don't think) about.
Snow Crash, a 1992 novel by Neal Stephenson, is an oft-cited example:
Many virtual globe programs, including NASA World Wind and Google Earth, bear a resemblance to the "Earth" software developed by the CIC in Snow Crash. One Google Earth co-founder claimed that Google Earth was modeled after Snow Crash, while another co-founder said that it was inspired by Powers of Ten. Stephenson later referenced this in another of his novels, Reamde.
Stephenson's concept of the Metaverse has enjoyed continued popularity and influence in high-tech circles (especially Silicon Valley) ever since the publication of Snow Crash. As a result, Stephenson has become "a sought-after futurist" and has worked as a futurist for Blue Origin and Magic Leap.
...
The online virtual worlds Active Worlds and Second Life were both directly inspired by the Metaverse in Snow Crash.
Former Microsoft Chief Technology Officer J Allard and former Xbox Live Development Manager Boyd Multerer claimed to have been heavily inspired by Snow Crash in the development of Xbox Live, and that it was a mandatory read for the Xbox development team.
And Snow Crash appears to continue to be a mandatory read for many people working in the sector.
Back on earth, scientists themselves seem pretty relaxed about the way that AI is enabling new scientific discoveries. For example, How AI Is Shaping Scientific Discovery.
I don't think the hard distinction between "the claims of company founders vs looking at the plans their companies are actually making" is significant in practice. For one thing, publicly-listed companies, and the individuals at their heart, have to be pretty up-front about what their plans are #. And in practice, "whether this is realistic" is just another way of asking "whether it's going to be profitable".
When they talk about the future, Big Tech leaders are primarily talking to their investors and creditors, not politicians. And the narratives they are spinning form an important part of whether or not people continue to invest. Articulating a profitable vision of the future is what keeps capitalism going. The ways it changes humanity (and the planet itself) are side-effects, or externalities.
# Elon Musk's companies vary, but the private companies still have private investors and/or creditors.
Just as Star Trek inspired at least one generation of scientists and engineers, fiction (especially science/speculative fiction) continues to inspire technologists. Being familiar with the fiction that inspires Big Tech visionaries, designers and developers can give us an insight into the way they think, and the issues they think (and don't think) about.
That's a significantly watered down claim from the idea that we have to take this seriously:
"This new kind of reality is typically distinguished by the attainability of intelligence and consciousness, maybe as emergent properties, maybe through intentional design. The process is usually hazy, but one thing these technologists have in common is a computationalist outlook, in that they believe (or at least behave as though they believe) that the human mind is an information processing system and that cognition and consciousness together are a form of computation. And, less philosophically, that the world can be understood as a computational process, with people as subprocesses."
Star Trek may have inspired a lot of people, but the future we had to prepare ourselves for wasn't ultimately holodecks, teleporters and a post-scarcity Federation.
Back on earth, scientists themselves seem pretty relaxed about the way that AI is enabling new scientific discoveries. For example, How AI Is Shaping Scientific Discovery.
Yeah, and examination of that article shows it's actually worlds apart from what the founders were talking about: rather than using an LLM which had hoovered up the contents of a library, the reference is to using an AI seeded with specific ideas about quantum physics to do a somewhat directed search around a fairly limited context.
In fact as the article goes on:
"AI is advancing science in a range of ways — identifying meaningful trends in large datasets, predicting outcomes based on data, and simulating complex scenarios"
A lot of what goes under the label of AI is actually statistical techniques which are now feasible given the sudden access to lots of computing power (and the availability of frameworks that allow access to that power to the non-computer-scientist).
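To illustrate the point (a minimal sketch with made-up data, nothing from the article): the "machine learning" below is ordinary least-squares regression, a statistical technique nearly two centuries older than the hardware. The framework call and the classical least-squares solution give the same answer; the framework just makes it a three-line job for the non-computer-scientist.

```python
# A sketch of the point above: scikit-learn's "machine learning" model and
# classical least squares (Gauss/Legendre, early 1800s) produce identical
# coefficients. The data is synthetic, invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 200 samples, 3 features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# The "AI" route: one framework call.
model = LinearRegression().fit(X, y)

# The statistics route: solve the least-squares problem directly.
X1 = np.column_stack([X, np.ones(len(X))])      # append an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(model.coef_, model.intercept_)            # roughly [ 2.0 -1.0  0.5 ], ~0.0
print(beta)                                     # the same numbers
```

What's new isn't the mathematics but the scale and accessibility: the same techniques now run over far more data, on far cheaper compute, behind far friendlier interfaces.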
I don't think the hard distinction between "the claims of company founders vs looking at the plans their companies are actually making" is significant in practice. For one thing, publicly-listed companies, and the individuals at their heart, have to be pretty up-front about what their plans are #. And in practice, "whether this is realistic" is just another way of asking "whether it's going to be profitable".
One needs to differentiate between public rhetoric and financial plans: Peter Thiel believes he can literally immanentize the eschaton ( https://youtu.be/2YVHC-2vkMQ ), but the people he funds and Palantir's operations are somewhat disjoint from that.
Just as Star Trek inspired at least one generation of scientists and engineers, fiction (especially science/speculative fiction) continues to inspire technologists. Being familiar with the fiction that inspires Big Tech visionaries, designers and developers can give us an insight into the way they think, and the issues they think (and don't think) about.
That's a significantly watered down claim from the idea that we have to take this seriously:
"This new kind of reality is typically distinguished by the attainability of intelligence and consciousness, maybe as emergent properties, maybe through intentional design. The process is usually hazy, but one thing these technologists have in common is a computationalist outlook, in that they believe (or at least behave as though they believe) that the human mind is an information processing system and that cognition and consciousness together are a form of computation. And, less philosophically, that the world can be understood as a computational process, with people as subprocesses."
Only if you consider fiction and speculation to be the same thing.
The more seriously you take speculation, the more value you gain from investing in thought experiments. This isn't the same as committing yourself to believing in them. I think taking speculation seriously is a skill that's worth developing if you want to think about the future. But waiting to see what happens is also an option.
One way of preparing for the financial future in capitalist societies is also called speculation. Most people treated bitcoin as though it were fiction when it first appeared. Early adopters who speculated on the strength of the narrative now have a valuable digital world asset that can also be used in the real world. (And with 21 Futures, it might have come full circle.)
Star Trek may have inspired a lot of people, but the future we had to prepare ourselves for wasn't ultimately holodecks, teleporters and a post-scarcity Federation.
The use of the past tense is intriguing - people are still working on bringing these things about, and most of the future then is still in the future now. (Which is something of a truism.)
Whether scarcity is a problem with a technological (or engineering) solution is another question. I'm inclined to think scarcity is currently mostly down to inequality.
Back on earth, scientists themselves seem pretty relaxed about the way that AI is enabling new scientific discoveries. For example, How AI Is Shaping Scientific Discovery.
Yeah, and examination of that article shows it's actually worlds apart from what the founders were talking about: rather than using an LLM which had hoovered up the contents of a library, the reference is to using an AI seeded with specific ideas about quantum physics to do a somewhat directed search around a fairly limited context.
In fact as the article goes on:
"AI is advancing science in a range of ways — identifying meaningful trends in large datasets, predicting outcomes based on data, and simulating complex scenarios"
A lot of what goes under the label of AI is actually statistical techniques which are now feasible given the sudden access to lots of computing power (and the availability of frameworks that allow access to that power to the non-computer-scientist).
Indeed. The concept of AI being either artificial or intelligent is something of a fiction.
And you may see these things as being worlds apart. But I'm not primarily concerned whether *I* see these things as being worlds apart. I continue to think a more relevant question is how the people building these worlds see them. I think there is benefit in trying to see these future worlds from the perspective(s) of people who have particularly well-resourced visions, rather than just asking ourselves how realistic we think they are. To ask a related question, how confident are you that the people investing in these visions are going to come to their senses?
Meanwhile, how many worlds away are these researchers?
https://www.engineering.columbia.edu/about/news/columbia-engineering-roboticists-discover-alternative-physics
I don't think the hard distinction between "the claims of company founders vs looking at the plans their companies are actually making" is significant in practice. For one thing, publicly-listed companies, and the individuals at their heart, have to be pretty up-front about what their plans are #. And in practice, "whether this is realistic" is just another way of asking "whether it's going to be profitable".
One needs to differentiate between public rhetoric and financial plans: Peter Thiel believes he can literally immanentize the eschaton ( https://youtu.be/2YVHC-2vkMQ ), but the people he funds and Palantir's operations are somewhat disjoint from that.
Peter Thiel is not the majority shareholder of Palantir - he holds around 7.2% of stock. But his desire to revitalise apocalyptic thinking seems pertinent to his support and funding of many aspects of the digital world, such as AI, particularly the Singularity. ("A hypothetical point in time at which technological growth becomes completely alien to humans, uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.")
Why do we need to compartmentalise our thinking, our perspectives?
"After validating a number of other physical systems with known solutions, the researchers fed videos of systems for which they did not know the explicit answer. The first videos featured an “air dancer” undulating in front of a local used car lot. "
"The researchers believe that this sort of AI can help scientists uncover complex phenomena for which theoretical understanding is not keeping pace with the deluge of data"
So again, a data analysis problem, not the creation of 'alternate physics'.
I continue to think a more relevant question is how the people building these worlds see them. I think there is benefit in trying to see these future worlds from the perspective(s) of people who have particularly well-resourced visions, rather than just asking ourselves how realistic we think they are. To ask a related question, how confident are you that the people investing in these visions are going to come to their senses?
I think they'll continue till they are stopped or their money runs out. I don't see why that's any more reason to take their justifications seriously, as opposed to looking at how they actually choose to speculate financially, to adopt your vocabulary.
Peter Thiel is not the majority shareholder of Palantir - he holds around 7.2% of stock. But his desire to revitalise apocalyptic thinking seems pertinent to his support and funding of many aspects of the digital world, such as AI, particularly the Singularity. ("A hypothetical point in time at which technological growth becomes completely alien to humans, uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.")
Except that "these people are terrified of death and believe in a hierarchy of humanity" has more predictive power than 'the Singularity' (which in their frame is purely an instrumental means of attaining immortality anyway, in the same way that Grok is better explained via psychodrama rather than a search for AGI).
Perhaps we should look at 'AI and asceticism'?
Could eremites program AI to do their asceticism for them?