Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.
To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.
There's a tradeoff point somewhere where it makes sense to go with one or the other. You can write a lot of code in bash and Elisp without having to care about the type of whatever you're manipulating, because you're handling one type and encoding the actual values in a type system would be very cumbersome. But then there are other domains which are fairly well known, so the investment in encoding them in a type system does pay off.
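For the "known domain" case, here's a toy sketch (the domain and all the names are invented for illustration) of the kind of encoding that pays off:

```typescript
// Toy example: a small, well-understood domain encoded in types,
// where each state carries exactly the data that state can have.
type Order =
  | { state: "pending" }
  | { state: "paid"; paidAt: Date }
  | { state: "shipped"; paidAt: Date; trackingId: string };

function describe(order: Order): string {
  switch (order.state) {
    case "pending":
      return "awaiting payment";
    case "paid":
      return `paid at ${order.paidAt.toISOString()}`;
    case "shipped":
      return `shipped, tracking ${order.trackingId}`;
  }
  // No default branch on purpose: adding a new state makes every
  // switch like this fail to compile until it handles the new case.
}
```

In bash you'd just pass strings around, which is exactly the right call when there's effectively one type in play.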
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
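I have no idea what Wings actually checks, but a first-pass version of "has the capability" can be read straight off the extension manifest. A sketch (not their real logic; the host list is illustrative):

```typescript
// Sketch: flag extensions whose manifests grant read access to AI chat
// domains. A content script that can run on a page can read the
// conversation, and a plain fetch() is enough to send it anywhere,
// so broad host access is the red flag worth reviewing.
interface Manifest {
  host_permissions?: string[]; // Manifest V3
  content_scripts?: { matches: string[] }[];
}

const AI_CHAT_HOSTS = ["chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"];

function hasExfilCapability(m: Manifest): boolean {
  const patterns = [
    ...(m.host_permissions ?? []),
    ...(m.content_scripts ?? []).flatMap((cs) => cs.matches),
  ];
  return patterns.some(
    (p) => p === "<all_urls>" || AI_CHAT_HOSTS.some((h) => p.includes(h))
  );
}
```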
This feels like the new version of not using version control or never making backups of your production database. It’ll be fine until suddenly it isn’t.
I have hourly snapshots of everything important on that machine and I can go back through the network flow logs, which are not on that device, to see if anything was exfiltrated long after the fact.
It's not like I'm running it where it could cause mayhem. If I ever run it on the PCI-DSS infra, please feel free to terminate my existence because I've lost the plot.
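For the "long after the fact" part, the flow logs are what make it checkable. A rough sketch of the kind of query I mean (the field names and known-host list are invented; real flow records vary by collector):

```typescript
// Rough sketch: total outbound bytes per destination, then surface
// anything large going somewhere that isn't a recognized host.
interface Flow {
  dstHost: string;
  bytesOut: number;
}

const KNOWN_HOSTS = new Set(["backup.example.com", "registry.npmjs.org"]);

function suspiciousDestinations(flows: Flow[], thresholdBytes = 50 * 1024 * 1024) {
  const totals = new Map<string, number>();
  for (const f of flows) {
    totals.set(f.dstHost, (totals.get(f.dstHost) ?? 0) + f.bytesOut);
  }
  return [...totals.entries()]
    .filter(([host, bytes]) => !KNOWN_HOSTS.has(host) && bytes > thresholdBytes)
    .sort((a, b) => b[1] - a[1]);
}
```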
It’s easy to be against it now because so much content that people recognise as AI is also just bad. If professionals can start to use it to produce content that is actually good, I think opinions will shift.
There are a lot of AI videos that you can very easily tell are AI, even if they are done well. For example, I just saw a Higgsfield video of a kangaroo fighting in the UFC. You can tell it is AI, mainly because it would be an insane amount of work to create any other way. But I think it is getting close to good enough that a lot of people, even knowing it is AI, wouldn't care. Everyone other than the most ardent anti-AI people is going to be fine with this when we have people creating interesting and engaging media with AI.
I think we will look back at AI "slop" as a temporary point in time when people were creating bad content and defending it as good even when it was not. Instead, as you say, AI video will fall into the background as a tool creators use, just like cameras or CGI. But in my opinion it won't be that people can't tell AI was used at all. Rather, it will be that they won't care, so long as there is still a creative vision behind it.
At least, that is what I hope compared to the outcome where there are no creators and people just watch Sora videos tailored to them all day.
We already have digital IDs in Australia, and it seems like a natural fit for this. The digital ID doesn't need to share much information with social media companies; it just needs to confirm your age. And then we don't need new third parties holding our personal information.
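I don't know the exact claims Australia's scheme exposes, but the property that matters is that the attestation can be a single boolean. A hypothetical shape:

```typescript
// Hypothetical attestation: the platform learns one fact (over the
// threshold or not), never the birthdate or identity behind it.
interface AgeAttestation {
  overThreshold: boolean; // the only fact the platform needs
  issuer: string;         // the government ID provider
  expiresAt: number;      // unix ms; short-lived to limit replay
  signature: string;      // issuer's signature over the fields above
}

// verify stands in for checking the signature against the issuer's
// published public key.
function acceptSignup(
  att: AgeAttestation,
  verify: (a: AgeAttestation) => boolean
): boolean {
  return verify(att) && Date.now() < att.expiresAt && att.overThreshold;
}
```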
Also yes, voting is mandatory in Australia. You get a small fine if you don't vote.
It's a very good system. $20 is the right number to get you off the couch, but not so much as to cripple you. There are exceptions if you have a valid reason for not voting. The maximum fine is ~$180, so you can't simply ignore the Electoral Commission and hope it goes away.
> These kinds of tasks ought to have been automated a long time ago.
It’s much easier to write business logic in code. The entire value of CRUD apps is in their business logic. Therefore, it makes sense to write CRUD apps in code and not some app builder.
And coding assistants can finally help with writing that business logic, in a way that frameworks cannot.
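To make "business logic" concrete, a toy example (the invoice rules are invented) of the kind of thing that is one small function in code but a maze of conditions in a visual builder:

```typescript
// Toy business rule: late fees with a tier-dependent grace period.
interface Invoice {
  total: number;
  customerTier: "standard" | "gold";
  overdueDays: number;
}

function lateFee(inv: Invoice): number {
  // Gold customers get a 14-day grace period; everyone else gets 7.
  const grace = inv.customerTier === "gold" ? 14 : 7;
  const billableDays = Math.max(0, inv.overdueDays - grace);
  // 0.5% of the total per billable day, capped at 10% of the invoice.
  return Math.min(inv.total * 0.005 * billableDays, inv.total * 0.1);
}
```

Rules like this change constantly, and a coding assistant can update the function and its tests in one pass, which no form builder can.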
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface and improved reliability. And yet they mark the biggest increases in my AI usage: each pushed the utility of AI for my work past a threshold, first where Claude Code became my default way to get LLMs to read my code, then where Opus 4.5 became my default way to make code changes.
2. Some people have become very tied to the memory ChatGPT has of them.
3. Inertia is powerful. They just have to stay close enough to competitors to retain people, even if they aren’t “winning” at a given point in time.
4. The harness for their models is also incredibly important. A big reason I continue to use Claude Code is that the tooling is so much better than Codex's. Similarly, nothing comes close to ChatGPT when it comes to search (maybe other deep research offerings might, but they're much slower).
These are all pretty powerful ways that ChatGPT gets new users and retains them beyond just having the best models.
All of my family members bar one use ChatGPT for search, or to come up with recipes, or other random stuff, and really like it. My girlfriend uses it to help her write stories. All of my friends use it for work. Many of these people are non-technical.
You don’t get to 100s of millions of weekly active users with a product only technical people are interested in.