
AI - The artificial intelligence thread



Maybe I didn't articulate my point well enough. All I'm saying is that the physics, whatever they are, that govern our brains are set and are the same physics that govern computer processors AS FAR AS WE KNOW (not saying they're even on the same level, but that they play by the same rules of physics).

Sure we don't understand consciousness yet but it's probably more likely that we don't understand it because we can't yet understand the complexity of the brain as opposed to missing something in the physics. 

We have a pretty good idea of physics even down to the quantum level. Sure we don't understand 100% yet but we understand much less about complex systems of neurons and such, which is why I believe it's more likely we're missing something there than in the physics.



 

  On 1/26/2025 at 4:46 PM, decibal cooper said:

I’ve pointed out to some of my mathematical friends that the unconscious appears to be better at math than they are. My friend George Zweig calls this the Night Shift. Bear in mind that the unconscious has no pencil or notepad and certainly no eraser. That it does solve problems in mathematics is indisputable. How does it go about it? When I’ve suggested to my friends that it may well do it without using numbers, most of them thought—after a while—that this was a possibility. How, we don't know. Just as we don't know how it is that we manage to talk. If I am talking to you then I can hardly be crafting at the same time the sentences that are to follow what I am now saying. I am totally occupied in talking to you. Nor can some part of my mind be assembling these sentences and then saying them to me so that I can repeat them. Aside from the fact that I am busy this would be to evoke an endless regress. The truth is that there is a process here to which we have no access. It is a mystery opaque to total blackness.


Yes, I think there have been studies showing that professional mathematicians actually use a lot of the subconscious machinery of spatial reasoning when doing mathematics. They see mathematical objects as spatial entities and do the reasoning subconsciously, which then shows itself as an intuitive understanding of mathematics. For example, their intuition tells them whether some potential theorem is true or false, and then they "just" need to formally prove or disprove it, bringing the problem into the realm of symbols. Also, a lot of them have synesthesia, or similar ways of "seeing" mathematics.

I'm no mathematical genius, far from it, but I have some kind of mild number-color synesthesia, and I used to have number-shape synesthesia too, but that has disappeared for some reason. I don't know if it has helped me in any way except that remembering numbers is easier when I think of them as colors. But when I had to do a bunch of math classes at uni and work on the problems, I could usually switch into a sort of automatic mode where I didn't have to actively think about what I was doing. That only applied to relatively easy problems, though, where I could just piece together the answer from previous problems. If a problem was completely new to me and there were no examples to follow, I had to enter a very focused state that sometimes took a while to reach and then get out of; I felt like I had gone a bit crazy and acted weird for a while.


  On 1/26/2025 at 6:20 PM, ignatius said:

[image]

timestamped where she starts talking about AI and surveillance. if you're wondering what some people are thinking when it comes to all this bullshit, and what some of these tech billionaires have in mind regardless of how possible or impossible it is. it's easy to see it as tinfoil-hat conspiracy type shit, but who the fuck knows what these weirdos are thinking about doing.. especially if crypto goes heavy and becomes a commonplace currency for average people.

this one is also timestamped to the surveillance/AI shit... easy to see how people go down conspiracy rabbit holes w/ all this shit, but generally i think it's good to be aware of what's happening w/ this tech and these companies.

 


 

  On 1/25/2025 at 8:18 PM, decibal cooper said:

I hope this visionary technology can help my smart AI-assisted washing/drying machine that I use for laundry. It says that it uses AI to detect the right amount of energy to dry my clothes and not waste energy. Trouble is that the shit always comes out damp when the cycle is over. Smart technology my ass! Why can't you completely wash and dry my clothes like normal you AI cunts! Always with the dampness!

And when washing, I put the right amount of detergent in there, and my clothes do not look soapy or sudsy enough during the cycle. I can see the clothes in there when it's running because the door is clear plastic, and it's definitely not sudsy enough in there, not at all. This goddamn AI can't even produce the proper amount of fucking suds!

 

  Quote

It sounds like you’ve had an aggravating experience, and it’s totally understandable to feel frustrated when a supposedly “smart” washer/dryer underperforms. Let’s break down both of the issues you mentioned—clothes coming out damp and not enough visible suds—and explore some possible explanations and fixes:

---

## 1. Clothes Coming Out Damp

1. **Sensor or Moisture Detection Issue:**  
   Many modern washer/dryer combos rely on built-in sensors to detect the moisture level in your clothes. If the sensor is dirty or malfunctioning, it might think clothes are dry even when they’re still damp.  
   - **Check your manual** for how to clean or recalibrate the sensor. Some machines require you to wipe down the metal contacts (often located in or near the drum) with a soft cloth.

2. **Dryer Vent or Filter Blockage** (if you have a vented machine):  
   If it’s vented, a blocked or kinked exhaust hose can lead to insufficient airflow, leaving clothes damp.  
   - **Check the vent hose** to ensure there are no blockages, kinks, or excessive lint buildup.  
   - Clean the lint filter thoroughly after every load.

3. **Load Size & Fabric Types:**  
   Loading bulky comforters, mixed with lighter items, can confuse moisture sensors or prevent even airflow.  
   - Try drying bulky items separately or in smaller loads.  
   - Use the correct cycle for the fabric type (cotton, synthetic, etc.).

4. **Spin Speed in Wash Cycle:**  
   If clothes aren’t spun efficiently before drying, they may still be too wet when the drying phase starts.  
   - **Increase the spin speed** setting (if your machine allows) so clothes come out drier after the wash cycle.

5. **Cycle Selection and “Eco” Modes:**  
   Eco or energy-saving modes may use lower temperatures or shorter drying times.  
   - Make sure the cycle you select is designed for thorough drying, not just an eco-friendly or quick-dry cycle.

---

## 2. Not Enough Visible Suds

1. **Detergent Type and Concentration:**  
   High-efficiency (HE) washers are designed to use less water and detergent, which often means fewer visible suds—this is normal for many machines these days. If you’re using HE detergent, it’s formulated to produce less foam.  
   - Double-check you’re using the right type of detergent for your machine (HE detergent if required).  
   - If you’re already using the right type, **consider slightly adjusting the amount** you’re putting in. But be cautious—too much detergent can leave residue or trigger extra rinse cycles.

2. **Load Size and Soaking:**  
   If you’re washing a large load, sometimes the water and detergent can appear not to “suds up” as much, especially if the cycle starts with an efficient spray or tumbling action.  
   - Try a smaller load to see if you notice more suds.

3. **Machine’s Automatic Dispensers or Pre-measure Systems:**  
   Some machines have compartments or sensors that automatically adjust detergent flow. If the dispenser is clogged or not functioning, it might be dispensing less detergent than needed.  
   - Check and clean the detergent drawer or dispenser area regularly. Residue buildup can prevent the correct amount of detergent from mixing in.

4. **Hard Water vs. Soft Water:**  
   In areas with very hard water, you might see fewer suds because minerals interfere with soap’s foaming action.  
   - If you suspect hard water, you could try a water softener additive or a specialized detergent to see if it improves sudsing.

---

## Additional Pointers

- **Run a Maintenance Cycle:** Most machines have a self-cleaning or maintenance cycle. Running this monthly (or as recommended) can clear out residue, improve sensor accuracy, and ensure everything is working optimally.  
- **Check for Software Updates (Yes, Really):** Some “smart” appliances allow firmware updates. While it’s a long shot, a software update could fix certain glitches or sensor calibration issues.  
- **Contact the Manufacturer or a Technician:** If you’ve tried all the usual troubleshooting steps—cleaning sensors, verifying load sizes, etc.—and still get consistently damp laundry, it might be a defect or malfunction. At that point, it’s worth contacting customer support or a local repair technician.

---

I get that it’s maddening when technology that’s supposed to make life easier ends up complicating your routine. If the above steps don’t bring your clothes back to the dryness and “proper suds” you’re used to, don’t hesitate to reach out to the manufacturer or a professional. Hopefully with a bit of troubleshooting and maybe a tweak or two, your washer/dryer will start behaving more like the “smart” machine it’s advertised to be. Good luck!


 

  On 1/26/2025 at 10:46 AM, dr b0rk said:

wondering if this is even possible…?

like, clearly the entirety of programmable machines is based on symbolic processing. Everything I type into a code editor is limited by what can be expressed in these terms. What can be computed by my mind using rational thought.

Even if it’s a program that uses large amounts of data, or some model based on physics or some super abstract maths, maybe runs on a huge GPU cluster or whatever, it doesn’t matter.. the model itself swims in an aquarium defined by semantics and cannot get out of the glass boundaries. Or can it? I’m not sure but intuition says nope.

So, I’m not sure non-semantic computer processing is possible, as opposed to reality, where we as humans with a lot of unknowns and unknowables can actually transcend thought, aka states of ego death and such ☯️


I get comparing humans with ML and symbol/token processing, but from what I know most models now are built on ChatGPT's foundation of generating a reply by going through every word before it and predicting the most likely continuation. Which, granted, it's really freaking good at. I spent a couple days playing around with ChatGPT until I started asking it actually interesting questions, only to be given a regurgitated opinion cobbled together from texts from the internet.

I want my chatbot to actually be able to reason, to understand a subject. Not to semantically represent the overlaying nature of a subject but to actually generate new reasons to argue. Biased not by the dataset but by logic and, say, morals, which by itself is mind-numbing to even imagine a computer being able to do, at least in a capacity that is not trained on written text. Does this make sense? I'm babbling.

Getting out of the semantic aquarium would in my opinion require a model that represents our biological mechanisms of learning, the ones that shape us as a person, instead of being given a million books to interpret how we operate without question. Which would, I imagine, require an insane amount of processing power, plus the foundation of actually being able to contextualize the most abstract part of our reality, consciousness, into code, or more algorithms that supply the factor of growth :mu-ziq:
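The "goes through every word before it and generates the most likely response" loop can be sketched in a few lines of Python. This is a toy bigram model over a made-up two-sentence corpus, nothing like a real transformer, but the generation loop has the same shape: condition on what's been said so far, pick the most probable continuation, append it, repeat:

```python
from collections import Counter, defaultdict

# Made-up miniature "training data", for illustration only.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count bigrams: for each word, how often each successor follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, max_words=8):
    """Greedily emit the most likely next word, one token at a time."""
    out = [start]
    while len(out) < max_words and out[-1] in bigrams:
        nxt = bigrams[out[-1]].most_common(1)[0][0]  # argmax continuation
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

print(generate("the"))  # → "the cat sat on the cat sat on"
```

Note how always taking the argmax falls straight into a repetitive loop; real systems mitigate this with sampling and temperature, and condition on far more than the previous word, but the "predict the next token" core is the same.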

  On 1/27/2025 at 4:37 PM, Rubin Farr said:

aye, here’s a good one from Ars about it as well

  Quote

On LinkedIn, Meta Chief AI Scientist Yann LeCun, who frequently champions open-weights AI models and open source AI research, wrote, "To people who see the performance of DeepSeek and think: 'China is surpassing the US in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones.'"

https://arstechnica.com/ai/2025/01/deepseek-spooks-american-tech-industry-as-it-tops-the-apple-app-store/

 

https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf

DeepSeek has now also released an image generator.

I still stand by my claim that R1 is not as good as o1 pro when prompted with more exotic prompts.

  On 1/26/2025 at 6:05 PM, EdamAnchorman said:

Maybe I didn't articulate my point well enough. All I'm saying is that the physics, whatever they are, that govern our brains are set and are the same physics that govern computer processors AS FAR AS WE KNOW (not saying they're even on the same level, but that they play by the same rules of physics).

Sure we don't understand consciousness yet but it's probably more likely that we don't understand it because we can't yet understand the complexity of the brain as opposed to missing something in the physics. 

We have a pretty good idea of physics even down to the quantum level. Sure we don't understand 100% yet but we understand much less about complex systems of neurons and such, which is why I believe it's more likely we're missing something there than in the physics.


Have you heard of Panpsychism? I think it's an interesting attempt to explain consciousness:

https://en.wikipedia.org/wiki/Panpsychism

  On 1/28/2025 at 12:07 AM, kuniklo said:

Have you heard of Panpsychism? I think it's an interesting attempt to explain consciousness:

https://en.wikipedia.org/wiki/Panpsychism

That's interesting, thanks for that. I am certainly no expert, and I'm a Ph.D. chemical engineer, which means my brain seeks logic and the simplest answer. My problem with Panpsychism, I guess, is that we don't even know what consciousness is, and it just seems lazy and untestable to say that it's bundled with existence. I won't discount it, because anything is possible, but in my naive view we need to understand what consciousness is before we try to understand where it comes from. Then again, if we followed that, we wouldn't ever get anything done. 🙂


  On 1/28/2025 at 1:08 AM, EdamAnchorman said:

That's interesting, thanks for that. I am certainly no expert, and I'm a Ph.D. chemical engineer, which means my brain seeks logic and the simplest answer. My problem with Panpsychism, I guess, is that we don't even know what consciousness is, and it just seems lazy and untestable to say that it's bundled with existence. I won't discount it, because anything is possible, but in my naive view we need to understand what consciousness is before we try to understand where it comes from. Then again, if we followed that, we wouldn't ever get anything done. 🙂

My background is in science and engineering too so I'm also inclined to distrust any theory that can't be tested. But trying to explain consciousness strictly in mechanical terms also seems to run into a lot of difficulties. Maybe we can't truly understand what consciousness is until we understand where it comes from?

I'm also skeptical that this current generation of AI will achieve something like human consciousness. We might get there eventually but not just by building bigger LLMs.

You might be interested in Integrated Information Theory which formalises some ideas on consciousness using more than just vibes but can get a bit panpsychy. Global workspace (broadcasting) theory is also interesting. Both hoping to provide a more scientific perspective to the subject. Good subjects to prompt LLMs with too, to get overviews at least

  On 1/28/2025 at 2:56 AM, picklish2 said:

You might be interested in Integrated Information Theory which formalises some ideas on consciousness using more than just vibes but can get a bit panpsychy. Global workspace (broadcasting) theory is also interesting. Both hoping to provide a more scientific perspective to the subject. Good subjects to prompt LLMs with too, to get overviews at least

Any good starting points for reading about it?

  On 1/28/2025 at 7:04 AM, kuniklo said:

Any good starting points for reading about it?

probably these two

https://iep.utm.edu/integrated-information-theory-of-consciousness/#H1 - basically the first part and the objections

https://www.iit.wiki/ - for the deeper dive into the maths and papers (paywalled but SciHub exists)

As I understand it, they talk about consciousness as a spectrum, a scale: humans have more of it than ants. And their claim is that consciousness is actually a type of information, and that information, according to them, is fundamental. But this leads to weird outcomes.
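For a rough flavour of the "integration as information" idea, here's a toy Python sketch. It computes plain mutual information between two binary units, which is only a crude stand-in for IIT's actual phi (the real measure searches over partitions of a system's cause-effect structure), and the probability distributions here are made up purely for illustration:

```python
from itertools import product
from math import log2

# Joint distribution over two binary "units" A and B (made-up numbers).
# An "integrated" system: A and B tend to agree.
p_integrated = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
# A disintegrated system: A and B are independent coin flips.
p_independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

def mutual_information(p):
    """I(A;B) in bits: how much the whole tells you beyond its parts."""
    pa = {a: sum(v for (x, _), v in p.items() if x == a) for a in (0, 1)}
    pb = {b: sum(v for (_, y), v in p.items() if y == b) for b in (0, 1)}
    return sum(v * log2(v / (pa[a] * pb[b])) for (a, b), v in p.items() if v > 0)

print(mutual_information(p_integrated))   # > 0: the parts share information
print(mutual_information(p_independent))  # 0.0: no integration at all
```

The correlated system scores about 0.28 bits, the independent one exactly zero; IIT's claim, very loosely, is that a suitably refined version of this kind of number tracks how conscious a system is, which is exactly where the weird outcomes come in.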

I think this is what AI can be very useful for: selecting text from scientific papers and asking the AI questions about parts that aren't clear to you, or that are full of terminology. How accurate the answers are is of course uncertain, but it feels like it's a way to get closer to accessing high-level thinking in scientific fields that aren't your own (mine is sleep research, so some of the IIT people go to my conferences).

Obviously the more background in a subject you have, the easier it is to prompt the AI with detailed questions.

imho it’s more informative to read some Alan Watts tbqh.

I recommend “The Book”.

or maybe Stanislaw Lem’s “Solaris”.

this dumb metaphor of if you have a hammer, everything looks like a nail.

if the tool that we use to get to the core of reality, consciousness etc, is language, then obv everything will look like a subject or object. Using language to stick labels onto stuff, stuck inside the box. Language just can’t get outside of its own constraints.

a sort of futile attempt of conquering reality by furiously hammering away at it.

whereas there’s different approaches that are accessible by the human mind as well.. spiritual ones, where no questions remain really. basically, direct experience. But for the part of a mind which is language-driven, this letting-go is comically unacceptable.

what would happen if science would be able to accept that rationality is a deeply flawed invention?

  On 1/28/2025 at 1:53 PM, dr b0rk said:

what would happen if science would be able to accept that rationality is a deeply flawed invention?

So convenient a thing it is to be a reasonable creature,...

One of my favorite Ben Franklin quotes about rationality/reason, brilliant irony considering that he was the leading scientist and luminary of the so-called Age of Reason or enlightenment.


More than anything it should be a wakeup call to those that currently use LLMs as anything more than an entertainment tool: the LLM only responds with whatever is in its data set. If that data is only collected to comply with the policies of the country it's produced in (as is the case above), then that's all you'll get from it. It's like if the model's training set were Encarta 95 and you tried to ask it about the commercial advantages of working at the World Trade Center.


  On 1/28/2025 at 1:00 PM, picklish2 said:

I think this is what AI can be very useful for : selecting text from scientific papers and asking the AI questions about parts that aren't clear to you, or full of terminology. How accurate the answers you get are is of course uncertain, but it feels like its a way to get closer to accessing high-level thinking in scientific fields that aren't your own (mine is sleep research, so some of the IIT people go to my conferences).

I agree it can be useful for things like this but if you don't already know the subject matter how can you be sure it's giving you an accurate interpretation? These things are just probability machines at heart. I use them mainly for coding and even there where it's a lot easier to identify a "correct" answer they get things wrong and make mistakes all the time. Even the best models.

  On 1/27/2025 at 4:25 PM, chronical said:

 I want my chatbot to actually be able to reason, to understand a subject. Not to semantically represent the overlaying nature of a subject but to actually generate new reasons to argue.  Without being biased by the dataset but by logic and say, morals, which by itself is mindnumbing to even imagine a computer being able to do, at least in a capacity that is not trained on written text. Does this make sense? I'm babbling. Getting out of the semantic aquarium would in my opinion require a model that represents our biological mechanisms of learning and shaping us as a person

Can you expand a bit on what you mean by understanding, reasoning and semantic aquarium?

  On 1/28/2025 at 12:07 AM, kuniklo said:

Have you heard of Panpsychism? I think it's an interesting attempt to explain consciousness:

https://en.wikipedia.org/wiki/Panpsychism

To the panpsychists here, are you conscious when you are knocked unconscious? I think this perspective comes from finding it implausible that experience could somehow emerge out of matter, and so it must be a fundamental aspect of it by default. I think the issue is that you've now defined it in a way where it no longer maps onto anything useful or any of what we actually mean by consciousness.

  On 1/28/2025 at 2:56 AM, picklish2 said:

You might be interested in Integrated Information Theory which formalises some ideas on consciousness using more than just vibes but can get a bit panpsychy. Global workspace (broadcasting) theory is also interesting. Both hoping to provide a more scientific perspective to the subject. Good subjects to prompt LLMs with too, to get overviews at least

To the IIT enjoyers here, do you really believe you can get consciousness out of any system with sufficient phi? It feels very unsatisfying: just an arbitrary measurement that came from observing that the systems we do think are conscious score highly on it. Again it leads to strange predictions and redefines consciousness as something that doesn't map onto what we use the word to describe in daily life.

Functionalism is much more convincing for me personally. I believe a China brain would be conscious, and that the "p zombie" is not a coherent idea.

  On 1/28/2025 at 1:53 PM, dr b0rk said:

imho it’s more informative to read some Alan Watts tbqh.

I recommend “The Book”.

or maybe Stanislaw Lem’s “Solaris”.

this dumb metaphor of if you have a hammer, everything looks like a nail.

if the tool that we use to get to the core of reality, consciousness etc, is language, then obv everything will look like a subject or object. Using language to stick labels onto stuff, stuck inside the box. Language just can’t get outside of its own constraints.

a sort of futile attempt of conquering reality by furiously hammering away at it.

whereas there’s different approaches that are accessible by the human mind as well.. spiritual ones, where no questions remain really. basically, direct experience. But for the part of a mind which is language-driven, this letting-go is comically unacceptable.

what would happen if science would be able to accept that rationality is a deeply flawed invention?


I find this sort of logic endlessly frustrating. Maybe I just haven't grokked it, but it looks like a bunch of unfalsifiable, self-reinforcing claims that allow people to imply their superiority over others and position themselves as always ahead of any real work being done. A whole lot of "just trust me bro" and "you wouldn't understand" with no measurable results or explanations. This should seem suspicious to anybody; it's guru logic. Solaris is great though, and you may resonate with this.


^consciousness is not magic. it (in any number of layers of complexity) is a consequence of evolution, not dissimilar to a working circulatory system. i've said plenty on this ITT and others tho so i'll shut up on that shit

anyway came here to post this: 

  Quote

Altman getting called out for being shortsighted/blinded by the game of just pushing for more more more cash despite any legit need, and his "AGI and beyond!" is a clear notice that any time OpenAI or others use the term AGI now it's meaningless tech-hype marketing going forward. it's still a meaningful thing maybe but if Altman or whoever else says it? yeah, just discount it immediately. (and well, likely most anything else they say. their words are all corrupted by the drive for cash & profit, obv)


  On 1/29/2025 at 4:15 AM, vkxwz said:

I find this sort of logic endlessly frustrating. Maybe I just haven't grokked it, but it looks like a bunch of unfalsifiable, self-reinforcing claims that allow people to imply their superiority over others and position themselves as always ahead of any real work being done. A whole lot of "just trust me bro" and "you wouldn't understand" with no measurable results or explanations. This should seem suspicious to anybody; it's guru logic. Solaris is great though, and you may resonate with this.

thank you, I did enjoy the text.

Understandable that you find this totally frustrating. But it is not logic.

there really is no logic to this. It’s much more an acceptance that logic is an invention. Logic seems to go together with language. Not sure what comes first tbh, but they seem to require each other.

So, it’s really, truly, extremely simple: is there “stuff” that’s not containable by logic? Stuff that cannot be conquered by building a logical scaffolding around it? If the answer you come to is “no, nothing can remain unexplained”, if it cannot be that reality also has an unknown number of constituents that cannot be conceptualised or discovered by thinking really hard, then we’re stuck with a frustratingly incomplete description of reality until we solve “everything”. And the closer we look at literally anything, even a damn grain of sand, the bigger the numbers get, and the more this system of logic becomes really weird and wants to collapse…

so this is like the getting out of the car in the text. It’s impossible to logic your way out of logic. The gurus do it with poetry.. I get that this can be repulsive…

Tbh my concept of the world was like this until a big ol’ psychedelic drug experience changed it for good.

I was flipping Ae’s LP5 vinyl from side A1 to B1 while on a large dose of Ketamine, and experienced something that first felt like a Deja Vu lightning bolt to the spine, then became a feeling of “omfg everyone is experiencing this same Deja Vu right now too”, then quickly crescendoed into full on singularity (without computers). It was beyond words and certainly beyond self. It was immediately clear that there’s no point in “understanding it”, because it was itself some deep merging of embodied understanding and being. I don’t know how to recreate it. The experience faded away, but I can say with conviction that the moment became pivotal for my life.

it’s just so much funnier taking things with a grain of salt.
