The creator has an estimated net worth of $50 million to $200 million from before OpenAI hired him. If you listen to any interviews with him, he doesn't really seem like the type of person who's driven by money, and I get the impression that no matter what OpenAI is paying him, his life will remain pretty much unchanged (from a financial perspective, at least).
He also still talks very fondly about Claude Code and openly admits it's better at a lot of things, but he thinks Codex fits his development workflow better.
I really, really don't think there's a conspiracy around the Codex thing like you're implying. I know plenty of devs who don't work for OpenAI who have preferred Codex ever since 5.2 was released, and if you read up a little on Peter Steinberger, he really doesn't seem like the type of person who would say things like that if he didn't believe them. Don't get me wrong, I'm not fanboying him. He seems like a really quirky dude and I disagree with a ton of his opinions, but I just really don't get the impression that he's driven by money, especially now that he already has more than he could spend in a lifetime.
I didn't say he didn't care about money, I just don't think that's his main driver, especially since he's already set for life. He spent 10 years building a company around a genuinely valuable product that just about everyone was using and, yeah, it made him rich.
I think "I'm going to keep the money I made from the company I spent 10 years building" and "I'm not going to lie about the coding tools to try and court a deal with OpenAI" aren't contradictory values. If anything, after hearing him talk for a while, I think it's way more believable that he switched from CC to Codex because Anthropic sent lawyers after him over the ClawdBot name than because of an OpenAI deal.
Having a few hundred thousand doesn't make you greedy, it makes you fortunate.
Having a hundred times that does make you greedy. You had more than enough long before getting to that point. You could have been content with less, so the only reason to try to extract more out of others is greed.
I've reached a similar conclusion, though not by targeting technology specifically. Rather, I got into the habit of asking myself, "Does X enhance my life in some way?"
It's interesting what this simple question can uncover.
How do you need to supervise this "less" than an LLM that you can feed input to and get output back from? What does it mean that it's "running continuously"? Isn't it just waiting for input from different sources and responding to it?
Like the person you're replying to, I just don't understand. All the descriptions are just random cool-sounding words/phrases strung together, with none of them actually providing any concrete detail about what it actually is.
I’m sure there are other ways of doing what I’m doing, but openclaw was the first “package it up and have it make sense” project that captured my imagination enough to begin playing with AI beyond simple copy/paste stuff from ChatGPT.
One example from last night:
I have openclaw running on a mostly sandboxed NUC on my lab/IoT network at home.
While at dinner someone mentioned I should change my holiday light WLED pattern to St Patrick’s day vs Valentine’s Day.
I just told openclaw (via a chat channel) the WLED controller hostname, and asked it to propose some appropriate themes for the holiday, investigate the API, and go ahead and implement the chosen theme plus set it as the active sundown profile.
I came back home to my lights displaying a well chosen pattern I’d never have come up with outside hours of tinkering, and everything configured appropriately.
Went from a chore/task that would have taken me a couple hours of a weekend or evening to something that took 5 minutes or less.
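For anyone curious what "investigate the API" cashes out to: WLED controllers expose a JSON API at `/json/state`, so the heavy lifting is roughly a single POST. A minimal sketch of the kind of payload involved (hostname, colors, and brightness here are illustrative guesses, not what the agent actually ran):

```python
import json

def build_wled_state(colors, effect_id=0, brightness=128):
    """Build a WLED /json/state payload setting segment 0 to the
    given palette ([r, g, b] triples) and effect."""
    return {
        "on": True,
        "bri": brightness,
        "seg": [{"id": 0, "col": colors, "fx": effect_id}],
    }

# A green/white/orange St. Patrick's palette (colors are illustrative).
state = build_wled_state([[0, 200, 0], [255, 255, 255], [255, 130, 0]])
payload = json.dumps(state)

# Applying it would be a POST to the controller, e.g.:
#   requests.post("http://<wled-hostname>/json/state", data=payload)
```

The nice part is that the agent reads the API docs and writes this glue for you; the API itself is small.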
All it was doing was calling out to Codex for this, but having it act as a gateway/mediator/relay for both the access-channel part and the tooling/skills/access is the “killer app” part for me.
I also worked with it to come up with a Proxmox VE API skill, and it’s now able to repeatably spin up VMs with my normalized defaults, including brand-new cloud-init images of Linux flavors I’d never configured on that hypervisor before. It’s a chore I hate doing, so now I can iterate in my lab much faster. It’s also very helpful for spinning up dev environments of various software to mess with on those VMs after creation.
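For context on the Proxmox part: VM creation goes through `POST /api2/json/nodes/{node}/qemu` with form parameters, and the cloud-init config drive is just another disk entry. A rough sketch of the parameter set such a skill might assemble (the VM name, default user, and storage names are hypothetical):

```python
def build_vm_params(vmid, name, cores=2, memory_mb=2048,
                    bridge="vmbr0", storage="local-lvm"):
    """Form parameters for Proxmox VE's POST /api2/json/nodes/{node}/qemu,
    wiring in a cloud-init drive so the guest self-configures on boot."""
    return {
        "vmid": vmid,
        "name": name,
        "cores": cores,
        "memory": memory_mb,
        "net0": f"virtio,bridge={bridge}",
        "ide2": f"{storage}:cloudinit",   # cloud-init config drive
        "ipconfig0": "ip=dhcp",
        "ciuser": "admin",                # hypothetical default user
    }

params = build_vm_params(9001, "debian-test")

# The actual request would carry an API-token header, e.g.:
#   Authorization: PVEAPIToken=automation@pve!claw=<secret>
```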
I haven’t really had it be very useful as a typical “personal assistant” both due to lack of time investment and running against its (lack of) security model for giving it access to comms - but as a “junior sysadmin” it’s becoming quite capable.
Great story. And it distills what the claw stuff is all about: in terms of utility, it's actually here. It's the multitude of "channels" you can enable out of the box that let you speak with the actual AI agent with access to the configured environment.
Yeah, and if you give another human access to all your private information and accounts, they need lots of supervision, too; history is replete with examples demonstrating this.
But there's typically plenty at stake for the recipient. If my accountant tried to use my financial information in some improper way, he'd better have a good plan for what comes next.
I don't have one going, but I do get the appeal. One example might be that it's prompted behind the scenes every time an email comes in: it sorts the mail, unsubscribes from spam, and handles other tedious stuff that's annoying but necessary. That's something running in the background; not necessarily continuously in the sense that it's going every second, but it could be invoked at any point in time on an incoming email. That particular use case wouldn't sit well with me with today's LLMs, but if we got to a point where I could trust one to handle the task without screwing up, then I'd be on board.
what are you guys running constantly? no seriously i havent run a single task in the world of LLMs yet for more than 5 mins, what are you guys running 24x7? mind elaborating?
The key idea is not running constantly, but being always on, and being able to react to external events, not just your chat input. So you can set a claw up to do something every time you get a call.
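One way to picture that "always on, reacts to events" shape (purely illustrative; this is not openclaw's actual code): a long-lived process that idles until an event arrives, then routes it to a handler, which is where an agent call would slot in.

```python
# Hypothetical event routing: the process does nothing until an
# event arrives, then dispatches it (a handler might invoke an LLM).
def on_incoming_call(event):
    return f"transcribing call from {event['from']}"

def on_incoming_email(event):
    return f"triaging email: {event['subject']}"

HANDLERS = {
    "incoming_call": on_incoming_call,
    "incoming_email": on_incoming_email,
}

def dispatch(event):
    """Route an event to its handler; unknown events are ignored."""
    handler = HANDLERS.get(event["type"])
    return handler(event) if handler else None

result = dispatch({"type": "incoming_call", "from": "+15551234"})
```

Nothing "runs" between events; the process just waits, which is the distinction being drawn here.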
They're creating blogposts that try to character assassinate OSS maintainers that refuse the AI slop PRs in their repos. Next up I assume it'll be some form of mass scam, probably a crypto scam of some sort, yknow that kinda good stuff that's definitely useful for society.
You don’t understand the allure of having a computer actually do stuff for you instead of being a place where you receive email and get yelled at by a linter?
Perhaps people are just too jaded about the whole "I'll never have to work again" or "the computer can do all my work for me" miracle that has always been just around the corner for decades.
This is about getting the computer to do the stuff we had been promised computing would make easier, stuff that was never capital-H Hard but just annoying. Most of the real claw skills are people connecting stuff that has always been connectable but was so fiddly that maintaining it became a full-time side project, or required opting into a narrow walled garden that someone can monetize just to get real connectivity.
Now you can just get an LLM to learn Apple’s special calendar format so you can connect it to a note-taking app in a way that only you might want. You don’t need to make it a second job to learn whatever glue is needed to make that happen.
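Assuming the calendar format in question is iCalendar (.ics), the glue really is small; here's a toy sketch that pulls event titles out of VEVENT blocks (it ignores line folding, escaping, and time zones, which real .ics files do have):

```python
SAMPLE_ICS = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Dentist
DTSTART:20250317T090000
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standup
DTSTART:20250318T100000
END:VEVENT
END:VCALENDAR"""

def parse_events(ics_text):
    """Extract (summary, dtstart) pairs from VEVENT blocks.
    Skips line folding, escaping, and time zones for brevity."""
    events, current = [], None
    for line in ics_text.splitlines():
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append((current.get("SUMMARY"), current.get("DTSTART")))
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            current[key] = value
    return events

events = parse_events(SAMPLE_ICS)
```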
Reading some documentation to figure out a format is something you do once and takes you a few minutes.
Are you a developer? Then this is something you probably do a couple times a day. Prompting the correct version will take longer and will leave you with much less understanding of the system you just implemented. So once it fails you don't know how to fix it.
I love that the posture is I have a problem I need you to fix haha.
I don't need you to fix my problems. I'm reporting that the LLM-based solution beats the dogshit out of the old "become a journeyman on one of 11 billion bullshit formats or processes" practice.
I'm not trying to help you, I'm just wondering how the LLM actually helps you.
You don't need to become a journeyman at understanding a format, you just need to see a schema, or find an open source utility. I just can't comprehend the actual helplessness that a developer would have to experience in order to have to ask an LLM to do something like this.
If I were that daunted by parsing a standardized file format for a workflow, I would have to be experiencing major burnout. How could I ever assume I could do any actual technical work if I'm overwhelmed by a parsing problem that has out-of-the-box solutions available?
I’ll give you a real concrete example. I had to build an app on the Mac, which needed to be signed. I did not want to learn Apple signing procedures in order to do this. It turns out I did not have to, because I got the robot to learn it. So then I was able to finish doing what it was I intended to do without having to spend an afternoon or a day misunderstanding the Apple signing procedures.
Could I have learned these and become a more virtuous person by knowing Apple’s signing rules? Maybe. What’s much more likely is that I might’ve just stopped doing this rather than deal with that particular difficulty. Instead, I was able to work on other problems that arose in the building of this application.
What I am suggesting to you is that I don’t have to fucking feel bad for being daunted anymore. And neither does anyone else. Folks that want to do that on their own time are free to, but I’m never going back.
There are a lot of projects for a lot of people where this is going to start to be the operative situation. Folks who might have gotten stuck on an early stumbling block are now just moving ahead and learning about different, and frankly more interesting, problems to solve. I’m still beating my head on things, but those things are no longer “did I get this format just right?”
This shift is analogous to how we took having to do computer arithmetic out of the hands of programmers in the 80s. There used to be a substantial part of programming that was just computer arithmetic. Now, almost nobody does that. Nobody in this thread could build a full adder if their life depended on it or produce an accurate sin function. It used to be that that would’ve stopped you cold when trying to answer an engineering problem on a computer. Now it doesn’t. We do not run around telling people that they’re not engineers or that they’re not learning because we have made this affordance.
A full adder is literally one of the easier theoretical computer science concepts, and a sine approximation is a simple Maclaurin series. And yes, if you can't do a simple series expansion, you are not an engineer. You may be a developer, but not an engineer.
These are both first or second year bachelors topics. Just because you're unable to work through simple math problems doesn't mean any semi-competent computer professional would be.
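For concreteness, the full adder in question is three gates' worth of logic; a quick Python sketch:

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin,
    carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, width=8):
    """Chain `width` full adders to add two integers bit by bit,
    i.e. a ripple-carry adder (result wraps modulo 2**width)."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

total = ripple_add(13, 29)  # behaves like (13 + 29) % 256
```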
Was it a good thing for anyone writing software that included those things to need to work out not only how they behave on a blackboard but how they behave on the real machine in question? And how they behave on the next machine over?
Do you yearn to return to that world? I suspect most people don't. It's not just knowing your own machine, but any machine the code could run on. It's also not just reaching for some 2nd year bachelor topics when the matter at hand is much more complicated. Where does your sine approximation fail? How do you know? Can you prove that? Does the compiler or the hardware decide to do things behind your back which vitiate any of those claims?
Knowing the answer to all of that every time you need a sine is not something 99.99% of engineers need to worry about. IT USED TO BE. But now it's not. No one is going back to that.
I don't know what world you live in, but I still definitely need to know the approximation error of the methods I use.
sin(x) has one of the simplest Maclaurin series:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! ...
For any partial sum of that series, the error is always strictly less than the absolute value of the next term in the series. The fact that this was your example of a "difficult" engineering problem is uh, embarrassing.
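That alternating-series bound is easy to check numerically; a quick sketch using Python's math module for reference values:

```python
import math

def sin_partial(x, terms):
    """Partial sum of the Maclaurin series for sin(x)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# With 4 terms (up through x^7/7!), the first omitted term is x^9/9!,
# so the error should stay strictly below it.
for x in [0.5, 1.0, 1.5]:
    error = abs(sin_partial(x, 4) - math.sin(x))
    bound = x ** 9 / math.factorial(9)
    assert error < bound
```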
For good measure, I would of course fuzz any component involving numerical methods to ensure it stays within bounds. _As any competent engineer would_.
And I absolutely work things out on pen and paper or a white board before implementing them. How else would I verify designs? I'm sure you're aware that fixing bugs is cheapest in the design phase.
Are you living in an alternate reality where software quality does not matter? I'm still living in the world where engineers need to know what the fuck they're doing.
Oh, IEEE 754 double precision floating point accuracy? Rule of thumb is 15–17 significant digits. You will probably get issues related to catastrophic cancellation around x=0. As I said earlier, the easiest solution in this case is just to measure. You don't really need to fuzz a sine approximation; you can scan over one period and compare against exactly calculated tables. I would probably add a cutoff around zero and move to a linear model if there are cancellation issues.
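And "scan over one period and compare" is cheap to do; a sketch, assuming the approximation under test is a plain Maclaurin partial sum:

```python
import math

def sin_series(x, terms=10):
    """Maclaurin partial sum for sin(x), up through x^19/19! by default."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Measure the worst-case error over one period instead of proving a bound.
grid = [-math.pi + i * (2 * math.pi / 10_000) for i in range(10_001)]
max_error = max(abs(sin_series(x) - math.sin(x)) for x in grid)
```

With 10 terms the observed worst case lands well below 1e-8, consistent with the alternating-series bound at |x| = pi.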
And if the measurement shows the approximation has too much floating point error, you can always move to Kahan sums or quad precision. This comes up fairly often.
If I really had to _prove_ formally an exact error bound, that would take me some time. This is not something you would be likely to have to do unless you're building software for airplanes, or some other safety critical domain. And an LLM would absolutely not be helpful in that case. You would use formal verification methods.
"Oh, IEEE 754 double precision floating point accuracy?"
Ok, so we do agree! You DON'T want to go back to a system where everyone had to do their own arithmetic just to make a program! That's fabulous. I'm glad that we're in agreement.
Isn't it SO MUCH NICER to just have the vagaries of one arithmetic we've already agreed upon to deal with, instead of needing to become an expert in numerical analysis just to get along with things?
What does it "do for me"? I want to do things. I don't want a probabilistic machine I can't trust to do things.
The things that annoy me in life: tax reports, doctor appointments, sending invoices. No way in hell am I letting an LLM do that! Everything else in life I enjoy.