
User:RD/9k/Q618-AIParadoxDignum: Difference between revisions

Older revision by Reversedragon: copy markup from Q19,10
Latest revision as of 10:33, 22 April 2026 by Reversedragon: Strikes must be presented as pushing for a balanced symbiosis between old things and new things
(4 intermediate revisions by the same user not shown)

== Main entry ==
{{HueCSS}}<ol class="hue clean">


{{li|I=Z1/LLM/HAS|Q=618|Q2=618}}{{book|The AI Paradox}} (Dignum 2026)  ->  'paradox' is one of the most clickbait words you could put in a book title. clickbait for libraries.


</li></ol>

== Motifs or claims ==
<ol class="hue clean">


{{li|start=y|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=2}}Humans possess moral and ethical discernment  ->  I think that statement is actually worth doubting. maybe morality is an illusion and we're all just as bad at it as an AI.
 
{{li|I=F2/LLM/HAS|Q=618}}Inanimate objects lack the capacity for courage, honesty, and empathy / Inanimate objects have no courage / Inanimate objects have no honesty / Inanimate objects have no empathy
{{li|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=2}}LLMs lack the capacity for courage, honesty, and empathy  ->  this one is... odd. it really depends on what you define virtues and moral actions {{em|to be}}. like, there are going to be some moments where we could pull out "wizard of oz" definitions and say that even if a particular object "doesn't have a heart" it still produced results that were consistent with courage, honesty, or empathy, and deserves the Wizard's medal or diploma.<br/>
let's shift the discussion over from LLMs to different kinds of inanimate objects, like political parties. a political party as a whole is inanimate; it doesn't have a unique new mind of its own. this is why Marxists speak of political parties as "superstructural" within the "base-to-superstructure" process — as a whole object, a political party is an inanimate object generated out of a group of living beings based partly on their [[E:shovel dream (meta-Marxism)|distorted experiences of being in that object]], warped by [[E:shovel (meta-Marxism)|the shape of the object they're making]] as they're in the process of making it. but although a political party is inanimate, we want it to have good values and make good decisions. if a political party makes good decisions, we consider it to have done a good thing much as we do an individual person. so do you really have to be human to have courage, honesty, or empathy? I genuinely doubt it. if Che Guevara is out there with a band of rebels trying to take back a country for the peasants, the group as a whole is inanimate, but it's hard not to say it has courage.
 
{{li|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=3}}Humans will always be ahead of {{TTS|LLMs|large language models}} because it always takes human abilities to improve them  ->  is it just me or does this have really nasty undertones? {{em|this}} statement feels like one my tangent about Communist parties totally applies to. it feels like the kind of person who would say this would also insist that Communist parties are incapable of designing society because "animate" individuals created them, and as inanimate objects they will never rise above the power of the individual. cue a lot of [[EC:9k/RD/Q50,98|Kantian nonsense]] about how society strictly comes from many individuals doing the same thing separately in parallel and doesn't 'in fact' come from individuals forming larger structures or phenomena.
 
{{li|I=S2/PT|Q=618}}Humans will always be ahead of {{TTS|LLMs|large language models}} because it always takes human abilities to improve them  ->  there's also like... oh god. there's an implication here that if you have a few founders who created towns and industries, and some workers who come to work in them afterward but will never create whole industries, then the proletariat is inherently worse than the bourgeoisie and the world doesn't need it, because humanity is at its best when everyone has to constantly invent whole new business territories. nope, I don't like this proposition at all.
 
{{li|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=5}}Because AI is created by people, we have power to decide how it's designed and developed / Because AI is created by people, people have power over it  ->  only on page 5 and we're already tossing out "the weasel we" with a totally vague meaning inside. AI is created by people? what people? are the people who created it ever the same people who are trying to reclaim power over it? if they're different people, could that {{em|maybe}} pose a problem?
 
{{li|I=F2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=8}}{{TTS|LLMs|large language models}} do not possess an ontology of the world  ->  now that's just incorrect. I've seen enough analytic philosophy to know that its practitioners all think ontology can only be done through language and the definitions inside language. (and all {{em|philosophy}} period, if you're Wittgenstein!) so no, if an LLM has a sophisticated enough understanding of language, it has an ontology of the world at least as good as a lot of human analytic philosophers. maybe that's saying something rather negative about the philosophers. but honestly, I'd rather accept that maybe they {{em|are}} right and the AI does contain an imperfect ontology. [[E:our world is on fire|our world is on fire]]; we don't really have time to debate about whether ontology "is language" or not. it is. language is second-order logic and logic is language. [[EC:9k/RD/Q618-SecularAnimism|some people]] are going to call the relationship between ontology and real ecosystems "biosemiotics" because they can't imagine ontology or naïve dialectical interactions not being language. let's move on.
 
{{li|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=10}}AI cannot reason based on ethical principles  ->  I'd push back on this one too. it's definitely not that I {{em|want}} AI to be used for this purpose, but I have to recognize that if you build a logic engine like the one I've been building here, one that puts two propositions or concepts together and spits out another one recognized as the best answer, I'm not too sure a sophisticated enough AI couldn't do the same thing. I mean, I came up with this whole concept based on an LLM that was doing it really badly. they might not be able to do it yet, but yeah, I think it would be foolish to assume that I'm definitely better than something that is combining the work of thousands and thousands of people who wrote a corpus and hundreds of AI experts. I'd have to be pretty full of myself.<br/>
which reminds me. it would be kind of hilarious to go to the WSWS AI and see if you can report wrong answers. I bet they would hate that. but the idea is very funny in my head. god, should I just stop using the duckduckgo AI* entirely and only use the Trotskyist AI? besides the possibility of not using either of them, why not?<br/>
(* I primarily do it to pose challenges to AI instead of using it 'unironically', and there is now a disclaimer on the first scrap containing an AI 'contribution' that it 'isn't real information' and will not make it into the final book.)
 
{{li|I=F2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=10}}AI systems cannot distinguish possible from impossible, therefore they hallucinate  ->  and on this one? I hate to break it to you, but any system of logic does this. Cartesian reasoning does this. mathematics will do this to you if you haven't carefully matched your equations to the way the actual experiments operate. humans are totally capable of AI-style hallucinations. sometimes the more committed to "reason" they are, the more hallucinations come out of them, because "reason" (logic) and Materialism are not the same thing.
 
{{li|I=S2/LLM/HAS|Q=618|ww=AIParadoxDignum|pp=11}}Our need to belong, recognize social cues, and cooperate has shaped the evolution of human intelligence; AI only layers collaborative capabilities onto a framework not inherently designed for social engagement  ->  now you've accidentally implied that autistic people aren't human because neurotypicals are the pinnacle of human evolution.<br/>
I swear I'm not just fishing for dirt; I literally mean that that second part describes my brain {{em|exactly}}. it isn't designed for socialization at all, and all of the social features are completely glued on. that statement really kind of blindsided me, because you couldn't have come up with a more fitting insult to throw at me if you were intentionally trying to.
 
{{li|I=S2/ES|Q=618|ww=AIParadoxDignum|pp=12}}Strikes must be presented as pushing for a balanced symbiosis between old things and new things in order to be meaningful  ->  ew. get this Fukuyama dialectic stuff away from me
 


</li></ol>
<!--
 
== Subjective themes ==
<ol class="hue clean">
</li></ol> -->


== Related ==<!--
{{li|start=y|I=S1/MX|tradition=MX onto LLM, MX onto HAS|Q=618|Q2=618}}tin man (machine learning; existential materialism)  ->  the motif of an inanimate object lacking the characteristics of humans that it would normally need in order to properly carry out morally Right actions {{em|per se}}, which nonetheless manages to create results which are hard to deny as being moral or virtuous.<br/>
the Cowardly Lion has no courage, the wizard awards him a medal. the Scarecrow has no brain, the wizard awards him a diploma. the Tin Man has no heart, he gets awarded basically a heart pillow which does nothing a heart does and has no feelings. [https://en.wikipedia.org/wiki/Tin_Woodman] {{YouTube|u_6S1N5RZrk}} {{YouTube|_iRpd6PgdLI}} the three keep wanting something that supposedly gives them the ability to do something, but turn out to be better at that thing than they think they are while the Wizard can't make them any better at it.<br/>
the Tin Man is actually a cyborg in the original book, made out of an ordinary woodman. but shhh, that's not important to the metaphor, don't remember it; it's only about whether he gets his award
 
{{li|I=S2/ES|Q=618|Q2=618}}Language is a method for expressing particular underlying ontologies that have been arrived at through other methods  ->  you win, analytic philosophers. I will do anything to no longer have to listen to really bad pro- or anti-AI arguments.
 
</li></ol>
 
== Related ==
<ol class="hue clean">


</li></ol>--><!--
{{li|I=S1/ES|Q=618|Q2=618}}our world is on fire (motif)
 
</li></ol>


== Wavebuilder combinations ==
<dl class="wikitable hue data_wavebuild three">
{{WaveBuild| -- | -- | -- }} -- en: Along With, Produces  ??  ?? --
{{WaveBuild| {{E:Q618/HAS|Inanimate objects have no courage}} | {{E:Q618/ML|Che Guevara}} | {{E:Q618/ML|Communist parties are inanimate objects with courage}} }}
{{WaveBuild| {{E:Q618/HAS|Inanimate objects have no courage}} | {{E:Q618/ML|Strikes are acts of courage}} | {{E:Q618/ML|Unions are inanimate and courageous}} }}
{{WaveBuild| {{E:Q618/HAS|Inanimate objects have no empathy}} | {{E:Q618/ML|Communist parties are inanimate and courageous}} | {{E:Q618/MX|tin man (existential materialism)}} }}
</dl>
{{E:Q618/PT|asdfsdf}} -->


== Ideology codes ==
{{HueNumber|Q83|asdfsdfsdf}}
</ol> -->
* MX / existential materialism
* LLM / machine learning
* HAS / humanities, arts, and social sciences
* HAS onto LLM




