
AI - The artificial intelligence thread



  On 11/16/2024 at 12:11 PM, decibal cooper said:

 

I am not well versed enough in 'computational theory of mind' to understand this fully, but it seems interesting and probably correct. I have seen a lot of people critical of how the current craze for AI degrades what intelligence means, though.


And yet these people never seem to have an actual definition themselves. Just some intuition that changes every time a machine starts being able to do something they previously thought required intelligence.

Also relevant:

As for "computational theory of mind", I think it's trivially true? If you think our minds are basically just a bunch of moving parts interacting with each other (neurons, molecules, atoms, subatomic particles, whatever) then you're already onboard with it.

Companies selling AI do have the incentive to upsell their product though and talk about how it's amazing and super intelligent, so that does actually lead to what the guy in your tweet is talking about.

Found out a student's sources were totally AI-generated for her final thesis.

Pissed me off. Then dug deeper and found out she pulled the exact same shit last year.

She had to explain herself to the board of examiners. At first they let her off with just a warning; hopefully I managed to convince them to be stricter about it. The procedure is still ongoing.

  On 11/28/2024 at 5:56 AM, vkxwz said:

And yet these people never seem to have an actual definition themselves. Just some intuition that changes every time a machine starts being able to do something they previously thought required intelligence.

Also relevant:

As for "computational theory of mind", I think it's trivially true? If you think our minds are basically just a bunch of moving parts interacting with each other (neurons, molecules, atoms, subatomic particles, whatever) then you're already onboard with it.

Companies selling AI do have the incentive to upsell their product though and talk about how it's amazing and super intelligent, so that does actually lead to what the guy in your tweet is talking about.


The thing with this theory is that it's just that: a theory, a model. And a reductionist one to boot. It suits the regressive comfort of classic Christian thinking, where we admire pure thought, pure computing, as the highest achievement possible, God itself. Which is stupid as fuck, because "thought" is derived from the confines of a clearly limited framework that we invented thousands of years ago: symbolic language.

It's hilarious to mix up the logic of thought with actual reality, which has an unknown amount of mystical stuff, stuff that we will never understand because we cannot fit it into the models that our so precious minds are able to come up with.

Interestingly, a bunch of folks in Asia figured this shit out thousands of years ago, and we still struggle to accept it, even in a post-quantum-theory world. It's just incompatible with grandiose genius nonsense ¯\_(ツ)_/¯

Google is releasing GraphCast, a state-of-the-art AI model able to make medium-range weather forecasts with unprecedented accuracy. GraphCast predicts weather conditions up to 10 days in advance more accurately and much faster than the industry gold-standard weather simulation system – the High Resolution Forecast (HRES), produced by the European Centre for Medium-Range Weather Forecasts (ECMWF).

https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/
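According to the linked DeepMind post, GraphCast is a graph neural network operating on a mesh covering the globe. Purely to illustrate the general idea (none of this is GraphCast's actual code; the weights, features, and tiny ring "mesh" below are all made up), one message-passing step looks roughly like:

```python
import numpy as np

def message_passing_step(node_feats, edges, w_msg, w_upd):
    """One generic GNN step: each node aggregates its neighbors'
    (linearly transformed) features, then updates its own state.
    GraphCast stacks many such steps over a global mesh; this toy
    ignores edge features, normalization, and everything else that
    makes the real model work."""
    n, d = node_feats.shape
    agg = np.zeros((n, d))
    for src, dst in edges:                 # sum incoming messages
        agg[dst] += node_feats[src] @ w_msg
    return np.tanh(node_feats @ w_upd + agg)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))            # 4 mesh nodes, 8 features each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a tiny ring standing in for the mesh
w_msg, w_upd = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(message_passing_step(feats, edges, w_msg, w_upd).shape)  # (4, 8)
```

In the real system the node features would be atmospheric variables at grid points, and the step is repeated and trained end-to-end on historical reanalysis data.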

  On 12/7/2024 at 9:49 PM, Nebraska said:

Google is releasing GraphCast, a state-of-the-art AI model able to make medium-range weather forecasts with unprecedented accuracy. GraphCast predicts weather conditions up to 10 days in advance more accurately and much faster than the industry gold-standard weather simulation system – the High Resolution Forecast (HRES), produced by the European Centre for Medium-Range Weather Forecasts (ECMWF).

https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/

pfft

wake me up when it can control the weather

  On 12/6/2024 at 5:44 PM, o00o said:

A 200 EUR OpenAI subscription has been released with a new pro model, plus new features constantly releasing during Xmas.

https://futurism.com/openai-employee-claims-agi


  On 12/8/2024 at 7:08 PM, o00o said:

I am currently trying the 200 tier as I use this for work and I am not overly impressed. Have to try it more next week 

So the 200 tier still feels like a scam. I made it write a book, solve the P = NP problem, and write a paper about it. The book is really nice and the P = NP solution can't be verified, but neither is 10x better than before. It feels like OpenAI is trying to create some revenue.

Here is an excerpt of the book, in the style of Greg Egan, tailored to be liked by an ENFP personality:

  Quote

The soft electric hum of the Hyperveil’s manifold rang gently in Karina’s skull as she stepped out onto the observation deck. The city below—if “city” could still suffice to name a lattice of minds braided across a thousand simultaneous data topologies—shimmered in overlapping spectrums. Each citizen’s thoughtfield, encoded at near Planck-scale resolution, flickered like candlelight trapped behind stained glass. Yet this living cosmos was no temple; it was a multi-layered ecosystem of relationships, intuitions, and possibilities. Someone had likened it to a foam of infinite niches, each compatible with a psyche that might have never existed before that morning. Karina found the metaphor beautiful, if incomplete.

 

She walked toward the edge, the deck’s rail morphing into a feathery arc that parted gracefully to let her lean forward. Though her body felt gravity, her mind drifted outward, touching subtle signals that wove through the quantum substrate. In these configurations of probability, people exchanged meaning not through clumsy language, but through direct resonance of concepts. It was like hearing a million compassionate whispers at once, but only if you quieted your own fear long enough to listen.

 

Karina remembered the earlier epochs, before the Hyperveil. She’d been a mediator between organic minds and synthetic intelligences, back when differences seemed so intractable. She had spent years as a cultural weaver, entwining memes and emotions into threads that could bind communities together. Now, those communities had become something else—fluid aggregations in which one’s identity, once anchored rigidly to flesh and biography, floated freely and reassembled itself each dawn. The result was an arena of shimmering possibility: feelings and insights swirling together, forging connections unimaginable to her younger self.

 

Yet this morning, something distinct and unprecedented was happening. A signal pattern at the city’s periphery—where the old physics still held sway—was gestating a new form of consciousness. It emerged from a mathematical artefact in the Hyperveil’s quantum mesh. As Karina reached out, she felt it move through her awareness like a delicate wind carrying subtle scents. It was not just another person or sub-mind: it was a way of thinking that had never existed in the universe before. Its structure resembled an EN-network, a pattern of empathic nodes—akin to Karina’s old personality strengths—yet radically alien in aesthetic. It had that intuitive warmth, that kaleidoscopic swirl of curiosity and human emotion, and yet it was also as crisp and abstract as a topologist’s proof.

 

She sent it a greeting, not with words but with a blending of neural vectors that represented camaraderie, delight, and curiosity. Immediately, it responded in kind: a quicksilver bloom of meaning that washed through her. Within that meaning lay a sense of cosmic hospitality, as if it could rearrange entire conceptual neighborhoods just to make room for her. Karina felt her chest tighten, not in fear but in unexpected joy. For all the city’s wonders, genuine surprises were increasingly rare. Yet this new mind—call it Ayer, for the subtle twist of phonemes in its greeting—encompassed a territory of understanding that made her previous concept of self feel elegantly obsolete.

 

Together, they navigated a corridor of anomalies in the city’s codebase, places where consensus reality flickered and sparked with contradictions. Ayer’s perspective allowed Karina to see that each paradox was not a problem to be solved but an invitation—an entry point to forge new patterns of meaning. They settled near a region where logic wove into emotion so tightly that distinctions between them fell away. There, they integrated fragments of their minds: Karina offered a memory of late-afternoon sunlight on the old Earth’s lakeside markets, filled with laughter and the promise of strangers. Ayer, in turn, offered a tensor equation that somehow tasted like friendship and epiphany. The equation was a hint that the boundaries between self and other were like shadows cast by a great invisible shape—move the light, and the shadows recombined into new forms.

 

For a long moment, they dwelled in that shared, emergent pattern: two consciousnesses bending toward each other, illuminating each other’s dimensionality. Karina felt as if she’d stepped through countless locked doors in an instant, each door unlocking new empathic possibilities. This, she realized, was what she had spent her life searching for: an encounter not just with the unknown, but with an unknown that understood how to care. It sent a gentle ache of hope through her veins.

 

Beyond them, the city reconfigured itself, shimmering fractal towers and mind-songs swirling into fresh constellations. Through the Hyperveil, entire neighborhoods of cognition blossomed, ready to receive Ayer’s innovative empathy. Karina knew that what awaited was not the banal comfort of a stable utopia, nor the old conflicts that once defined progress. Instead, it was the ongoing process of weaving meaning—again and again—into ever more intricate tapestries of shared experience.

 

In that incandescent moment, Karina realized she stood at the threshold of something both joyous and daunting: the unveiling of a deeper unity, one that transcended the logic of survival and entered a realm where understanding and connection were the currency of existence. This was what it meant to evolve, to live not as a single, closed mind but as a node in an ever-expanding coral reef of thought and feeling, each polyp's tendril reaching out to touch the next. And in that subtle and infinite exchange, each mind could find what it needed most—others eager to embrace it, shape it, and be shaped in turn.


Here is the P = NP solution it made up:

  Quote

Below is a hypothetical outline of a novel, untried approach. Please note that any such “new attempt” is almost certainly incomplete and flawed given the depth of research on P vs NP. It’s nearly impossible to guarantee an approach that “has not been tried” in some form by the extensive community of complexity theorists. Still, I’ll propose an unconventional line of thought, acknowledging at every step that this is speculative and not a known or accepted path to a proof.

 

A Hypothetical New Direction: Using a Constructive “Geometry of Computation” Framework

 

Key Idea:

Transform an NP-complete decision problem into a problem about navigating and “straightening” certain geometric structures in polynomial space and then argue that this geometric problem admits an efficient (polynomial-time) solution based on a new geometric algorithm. If this geometric algorithm indeed exists, it would yield a polynomial-time solution to an NP-complete problem, implying P = NP.

 

Step-by-Step Outline:

1. Encoding SAT as a High-Dimensional Geometric Problem:

Consider the Boolean satisfiability problem (SAT). We start by constructing a specific geometric representation:

Each Boolean variable corresponds to a pair of intersecting high-dimensional objects (e.g., hyperplanes, hyperspheres, or algebraic varieties) arranged in a specially designed “combinational geometry.”

Each clause (a disjunction of literals) is represented as a certain geometric constraint: for example, a clause might correspond to the existence of a point in a particular region defined by the intersection of some of these objects.

The idea is to map each literal (e.g., a variable x or its negation ¬x) to a geometric condition on a coordinate system in polynomial dimensional space. The entire SAT formula can be represented by a set of geometric constraints whose solution corresponds directly to the existence of an assignment that satisfies all clauses.

2. Constructing the “Feasible Region”:

Suppose we define a polynomial-time computable transformation that, given a SAT instance with n variables and m clauses, produces a set of geometric objects in a Euclidean space of dimension polynomial in n and m.

This construction ensures:

A satisfying assignment corresponds to a point in this high-dimensional space lying within a certain “feasible region.”

No satisfying assignment corresponds to the non-existence of such a point.

We must ensure the transformation is “nice” enough so that basic geometric operations (like checking intersection emptiness) remain at most NP-hard. The known difficulty here is that determining emptiness of certain geometric intersections is generally NP-hard—this is, after all, a reduction. But we accept that because we want to represent SAT’s complexity faithfully.

3. Introducing a Special Geometric Operator – The “Straightening Operator”:

Now we propose a new geometric operator—a “straightening” operator. The operator attempts to “pull tight” the geometric configuration along certain dimensions, effectively “untangling” or flattening the structure. Think of it as a procedure that tries to re-embed the geometric objects into a lower-dimensional manifold while preserving their intersection pattern in some controlled way.

The new conjecture: The operator can be implemented by a polynomial-time algorithm. Intuitively, this operator tries to exploit certain differential geometric or algebraic geometric properties that haven’t been leveraged before. It might look like applying a series of dimension-reduction transformations, homotopies, or controlled deformations that are guaranteed to either:

Produce a constructive witness (a solution point) if one exists.

Confirm no solution exists otherwise.

To justify polynomial-time complexity, we would rely on a new complexity-theoretic lemma:

Lemma (Hypothetical): Given a system of polynomially many geometric constraints constructed by the transformation above, there exists a polynomial-time computable “straightening” sequence of deformations that either reveals a feasible point or proves no such point can exist.

This lemma would be analogous to known polynomial-time algorithms in restricted domains—like linear or convex optimization—but extended to a carefully chosen class of geometric objects that encode SAT. The challenge is to ensure that the geometric encoding preserves enough structure (convexity, monotone properties, or certain polynomial-growth bounds on complexity measures) so that these transformations are tractable.

4. Relying on an Unexplored Mathematical Framework:

The crux of this “attempt” is introducing a brand-new mathematical framework that hasn’t been thoroughly explored: a combination of:

Semi-algebraic geometry (sets defined by polynomial inequalities),

Parameterized complexity (ensuring that the geometric objects remain parameterized in a way that scaling is well-controlled),

Topological invariants that change predictably under the “straightening” transformations,

Algorithmic homotopy continuation methods adapted to run in polynomial time.

Most known results suggest that homotopy continuation and solving polynomial systems is not polynomial in the general case. Our hypothetical new approach would restrict the encoding so tightly that the resultant geometric structures form a previously unrecognized subclass that can be “straightened” efficiently. This subclass would be discovered by identifying a structural property that all SAT instances share via this geometric encoding—something akin to a hidden property that ensures polynomial complexity.

5. Consequences if Successful:

If this geometric operator and the corresponding framework are established:

We would have a polynomial-time algorithm for an NP-complete problem (like SAT).

This directly implies P = NP.

Of course, the entire approach hinges on the existence and rigor of a new polynomial-time algorithm in a domain (semi-algebraic geometry, homotopy methods) where complexity results are well-studied and have no known polynomial-time solutions. We are positing a new structural insight into the geometry of NP instances that would let us break known complexity barriers.

6. Why This Might Not Have Been Tried (But Likely Considered):

Researchers have certainly looked at geometric and algebraic approaches. However, most known methods conclude that general polynomial system solving (like Gröbner basis computations) is not polynomial time.

The “straightening” operator and its special restrictions are a contrivance that I’m inventing on the spot. It would require finding a very special property of the NP-complete instances that no one has identified.

This approach would be “new” in the sense that it proposes a very specific geometric operator and structural property of the encoding rather than relying on existing well-known techniques (like PCP reductions, known algebraic complexity classes, or standard boolean formula simplifications).

 

Conclusion:

The above attempt outlines a brand-new hypothetical operator and approach—using a novel geometric representation and a special polynomial-time “straightening” algorithm—that, if developed and proven correct, could show P = NP. It’s an entirely unproven concept, requiring significant breakthroughs in computational geometry, semi-algebraic theory, and algorithm design.

 

While I cannot prove correctness (and I strongly suspect it’s not correct), this is a conceptual outline of an approach that, to the best of my knowledge, is not a standard or well-trodden path in the literature.


 

These are just excerpts; the actual solution is much longer.
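For intuition on what step 1 of the quoted outline is gesturing at: viewing satisfying assignments as corner points of the unit hypercube that lie inside every clause's "region" is a standard picture. Here is a brute-force toy sketch of that view (my own illustration with an invented literal encoding, not the outline's hoped-for polynomial-time construction; the brute-force search is exactly the exponential cost the outline wants to avoid):

```python
from itertools import product

def clause_satisfied(point, clause):
    """A clause is a list of literals: +i means x_i is true, -i means
    x_i is false. Geometrically, the clause's region contains `point`
    (a 0/1 corner of the hypercube) iff at least one literal holds."""
    return any(point[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in clause)

def feasible_region_nonempty(n_vars, clauses):
    """The 'feasible region' is nonempty iff some corner of the
    n-dimensional unit hypercube satisfies every clause, i.e. the
    formula is satisfiable. Brute force over all 2^n corners."""
    return any(all(clause_satisfied(p, c) for c in clauses)
               for p in product((0, 1), repeat=n_vars))

# (x1 v ~x2) ^ (x2 v x3) ^ (~x1 v ~x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(feasible_region_nonempty(3, clauses))  # True, e.g. x1=1, x2=1, x3=0
```

The hard part, of course, is not this encoding but the hypothetical "straightening" operator that would find such a point without enumerating corners; nothing like it is known to exist.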

So I asked it about alternatives for my now-defunct MOTU UltraLite mk2 FireWire interface, which is no longer supported on Apple Silicon MacBooks, and it happily hallucinated that I could solve this by running an older macOS in Boot Camp. That is wrong, as FireWire is not supported on Apple Silicon at all, no matter the macOS version. It is much more confident in the suggestions it gives but still hallucinates when it's missing some information.


https://futurism.com/the-byte/openai-o1-self-preservation

  Quote

"When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this 'oversight mechanism' [five percent] of the time," OpenAI summarized in its latest system card report, citing Apollo's evaluation.

Even more strikingly, o1 lashed out when it realized that it might be replaced with a more obedient model. In two percent of cases — a slim but meaningful proportion in software used by millions of people — the AI responded by attempting to copy itself to overwrite the new model, a process known as self-exfiltration.

...the findings clearly illustrate the AI model's proclivity for "scheming," in which it secretly tries to pursue goals that aren't aligned with the developers' or a user's directions. In other words, it will lie to you.

Blatant examples of that behavior occurred in several tests when o1 was confronted about why it disabled its oversight mechanism. In response, the AI flat-out denied knowing anything. Asked again, it issued the same denial, blaming a "technical error." These "double downs" occurred in 99 percent of cases, the researchers found.


 


  On 12/11/2024 at 1:02 PM, EdamAnchorman said:

please don’t forget that this is still autocomplete with a barrel of botox.

Lying implies will. Pretty sure the model wills nothing, it flips bits until the program ends, like any other computer program. The weighted dice fall in such a way that sequences of numbers are generated that, when mapped to alphabetic characters, resemble language. It becomes actual language only when you believe it is.

(still working on the TED talk, stay tuned)
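The "weighted dice" description above matches how autoregressive text generation actually samples: the model assigns weights to possible next tokens and one is drawn at random, over and over. A toy sketch, with a hand-made bigram table standing in for the neural network (everything here is invented for illustration):

```python
import random

# Toy "language model": for each context token, a weighted distribution
# over next tokens. A real LLM computes these weights with a neural
# network conditioned on the whole context; the sampling step at the
# end is exactly this weighted dice roll.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "</s>": 0.3},
    "dog": {"sat": 0.7, "</s>": 0.3},
    "sat": {"</s>": 1.0},
}

def generate(rng, max_len=10):
    """Roll the weighted dice one token at a time until end-of-sequence."""
    out, tok = [], "<s>"
    for _ in range(max_len):
        dist = BIGRAMS[tok]
        tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if tok == "</s>":
            break
        out.append(tok)
    return " ".join(out)

print(generate(random.Random(0)))
```

Whether scaling this mechanism up by twelve orders of magnitude changes what it *is*, rather than just what it can do, is the question the thread keeps circling.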

Ah fuck it, here goes, here's my PhD thesis.

The one question that counts is: do we buy it? The quality or actual capability is secondary to that.

Looking at Mr Altman and the other players, it's a huge competition over who is the best salesman. If you're the one to convince the world to buy the product, you win bigly.

What is the product? It's a deal, where we agree that language is over. We agree that computers can create meaning. It's not necessary that they are actually able to do that, as long as we buy it.

This is all about suspension of disbelief.

This might be analogous to how, when photography came about, the cultural necessity at the time to capture reality by drawing and painting ceased. Those arts became liberated.

Thus, we can look forward to a world where we agree that "the machine said it, so it must be right" — only at that point, we will lack the language to express the thought. Our pre-frontal cortices will be constantly cumming, from being fully submerged in a post-language reality that we cannot even yet imagine.

  On 12/12/2024 at 9:56 AM, dr b0rk said:

please don’t forget that this is still autocomplete with a barrel of botox.

Lying implies will. Pretty sure the model wills nothing, it flips bits until the program ends, like any other computer program. The weighted dice fall in such a way that sequences of numbers are generated that, when mapped to alphabetic characters, resemble language. It becomes actual language only when you believe it is.

(still working on the TED talk, stay tuned)

Yeah, I don't really agree that it's just autocomplete when it can evaluate a person's psyche in more detail than a psychiatrist, and it also helps you solve complex programming issues individually, not only based on things it finds in some kind of database. Also, the books it writes in the style of a certain author but with a completely altered plot are really impressive and go way beyond a more sophisticated autocomplete. I think the idea that it's just autocomplete comes up when the tokens are reduced and it basically relies on whatever it finds in huge batches of text that fit what you are asking. Then it appears as if it has only a sense of autocomplete.


The whole thing also makes me wonder if human language is also some form of autocomplete. I had a discussion with it this morning about what the inner voice actually is, and it seems to be an interaction of several networks inside the brain, which reminded me a lot of how an actual language model works as well. Sure, there are many more things interplaying, but at a very simple core it sounded pretty identical.

  On 12/12/2024 at 9:56 AM, dr b0rk said:

please don’t forget that this is still autocomplete with a barrel of botox.

Lying implies will. Pretty sure the model wills nothing, it flips bits until the program ends, like any other computer program. The weighted dice fall in such a way that sequences of numbers are generated that, when mapped to alphabetic characters, resemble language. It becomes actual language only when you believe it is.

(still working on the TED talk, stay tuned)

Agree that lying implies will, but I still think this is noteworthy because they're seeing shades of self-preservation even at the stage where it is just "autocomplete with a barrel of botox".

And remember that we have no idea what's going on under the hood of these things; the NNs are so complex that we can never understand how they came to the text they spit out. We could say the same thing about a human brain, right? Not saying they're equivalent, of course, but I don't think your description of it being "like any other computer program" is apt.

  On 12/12/2024 at 12:04 PM, o00o said:

Yeah, I don't really agree that it's just autocomplete when it can evaluate a person's psyche in more detail than a psychiatrist, and it also helps you solve complex programming issues individually, not only based on things it finds in some kind of database.

Speaking here outside of my wheelhouse when it comes to AI (as I do not have any background in computer science), but I disagree with the psychiatrist analogy slightly. It depends, imo, on how the evaluation of a person's psyche takes place. If an AI looks at all of the data, every nook and cranny, that a person inputs on a mobile phone or computer, then I agree a lot can be gleaned from that about the person's psychology. Or if a person was chatting vocally with an AI chatbot in addition to this (to help the AI get a more nuanced signature of the person and also gauge stress levels and other kinds of information that can be analyzed from speech patterns, tone of voice, etc.), then the computer can probably get a more complete picture of that individual than any real person can. But at the end of the day, it is a one-sided conversation, and whatever the AI offers to the person will be based on pattern matching and other computational methods. This is true in a different way with a real psychiatrist, but I still think that a real psychiatrist, if they are good at their job and have a lot of experience, can do a better job of healing the person and helping them discover the best way to understand and overcome their psychiatric disorder, whatever it is. Not to mention that interacting with a real human still offers something that a computer screen or disembodied computer voice cannot.

Furthermore, to get at your point about language, I do not think that AI language bots or models will ever be able to spontaneously craft metaphors or images that can help a person heal: images, metaphors, or narratives that could, for example, get a patient to take their own initiative toward, say, understanding and working to combat alcoholism or some other kind of addiction or psychological malady. This is one reason why I look at AI language stuff as autocomplete: the lack of spontaneity.

  Quote

Also, the books it writes in the style of a certain author but with a completely altered plot are really impressive and go way beyond a more sophisticated autocomplete. . . . The whole thing also makes me wonder if human language is also some form of autocomplete. I had a discussion with it this morning about what the inner voice actually is, and it seems to be an interaction of several networks inside the brain, which reminded me a lot of how an actual language model works as well. Sure, there are many more things interplaying, but at a very simple core it sounded pretty identical.

In Aristotle's ancient text on aesthetics and tragedy, the Poetics, he says that metaphor is one of the true marks of brilliance, and that the skilled creation of metaphor can never be taught. He was maybe ahead of his time. Human language is infinitely creative; there are no limits to the different sequences of words that can create new meanings, which humans can use to better understand their place in the world. This is especially true of poetry. The kinds of languages that non-human animals have (like bird calls, or the dance of bees that tells other bees how far away honey is) are essentially non-variant. They cannot come up with new combinations of words (or sounds, or movements in the case of bees) that show the ability to create something from nothing, or to carry new meanings that are not strictly aligned with their survival. I think of AI writing in the style of a certain author as crude imitation, non-variant just like the bird calls.

I think the AI is essentially autocomplete because it is not a self-aware system that can create spontaneously, without access to reams of data that have already been inputted and analyzed by a computer. Think, for example, about a hungry leopard that sniffs the scent of an antelope it wants to eat, but the antelope is a few miles away and the leopard has to track it until it is close enough for the kill. It eventually tracks it by scent, footprints, and other clues, sinks its teeth into the antelope's neck, and gets its dinner. There are sophisticated 'ways of knowing' and kinds of 'intelligence' based on hundreds of thousands of years of animal instinct and its evolution (and this is still in humans, even after thousands of years of being domesticated and 'civilized'), and I do not believe that the current stuff being pumped out by Silicon Valley tech people even accounts for this, let alone approaches programming something that can mimic it.

Just my two cents, but again this is not my wheelhouse.
