The process is pretty informal, so there are no hard requirements. That said:
- Failures per day: let's say 0-100
- How long to retain records: no hard requirements? I guess a few months at least, some failures are pretty rare
- What's the lifecycle of a failure? Scripts record it, team members investigate it and assign to "root cause".
- Custom scripts:
(1) create ticket per failure
(2) create failure reports (to prioritize work - for example, if there were 50 failure reports with root cause 'github was down', the priority of "set up github mirror" gets bumped up)
(3) mass-update tickets (for example, if github.com is down, there will be a few dozen failed processes because of that)
(4) handle rules for automatic classification (again, if github.com is down, it'd be lovely if I could have a rule: "for the next 48 hours, every ticket which mentions github.com and 503 is auto-assigned to the 'github was down' root cause" - a rough sketch of what I mean is below the list)
- SSO, audit, compliance: nice but not required
- JIRA problems: search sucks. "Find similar ticket" sucks. Rules are missing (or need admin). Even something as simple as "close those 20 tickets and link them all to ABC-1234" is impossible.
- Google sheets: not enough automation. At least I can do "filter rows, copy-paste the 'root cause' field into all of them", and it is pretty fast, but: multi-line outputs don't look good and there is no automation (we did not explore App Script, maybe we should have...)
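(A rough sketch of the kind of rule I mean in (4). The names and structure are made up for illustration, not taken from any existing tool:)

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Rule:
    keywords: list        # all of these must appear in the ticket text
    root_cause: str       # root cause to assign on a match
    expires_at: datetime  # rule stops firing after this

def classify(ticket_text, rules, now):
    for rule in rules:
        if now < rule.expires_at and all(k in ticket_text for k in rule.keywords):
            return rule.root_cause
    return None

rules = [Rule(["github.com", "503"], "github was down",
              datetime.now() + timedelta(hours=48))]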
And yeah, I am getting the feeling this would be a custom job. We have resources in house to do so, but I was hoping there was an existing product. Surely there are people out there who run batch-like jobs and want them to be reliable? Something like data conversion jobs, CI builds, training jobs, etc...
Perhaps it's a good job for generative AI, I've heard it's pretty good at making websites (and security/availability is not an issue, as this will be an internal website not exposed to the internet). Or I may revisit Google's App Script...
Thanks for your reply. I suggest looking at Airtable and _maybe_ Linear. They have APIs and automations. You could likely get AI to rewrite your scripts.
If those don't work, you may have a business case for building it.
I'm a founder and dev looking for a good problem to solve. If the need could be proven (e.g. 10 people with decision power said they wanted it), I'd consider making it.
Such a nice design on the server side.. and yet on the client side, it uses the system address book - something that Google backs up on its servers, that many carriers back up too, and that many apps (like WhatsApp) save as well.
"Hey, _we_ don't store your contacts, we are good! Instead you have to manage them yourself, and in the process share your presumably "secure" Signal contact list with Apple, Google, Facebook, phone carriers and everyone else. But it's not on our servers, so we don't care"
This all started with a git history rewrite in a public project...
> Out of curiosity, where's the statement which explains the git history rewriting? This is the first I'm hearing of the whole thing, but rewriting git history is really suspicious tbh
> We never explained the history rewriting and we aren't obligated to. Git is a distributed VCS other people probably still have the history. We made a statement that it wasn't a supply chain attack (With other members of the greater rust community corroborating) in the now deleted reddit thread.
I think the thing about StepSticks is they all have the same (or very similar) pinout - so as long as the board is designed for a generic "stepstick", you can plug in your favorite kind.
Earlier journal log entries mentioned TMC2209_SILENTSTEPSTICK, but the most recent schematics removed this designation. Seems like an oversight.
That's no excuse for disbelieving GPS for extended periods of time.
Google Maps gets it right: it tries to keep you on the road, but only for a few tens of seconds. After that, if you are in the middle of uncharted territory, it'll show the marker there.
(This is probably because Google Maps can be used for walking/biking too)
Well, when I’m driving in Kyiv, and there is an air raid alert, usually my car navigation starts to derp, and after a few minutes it thinks that it’s suddenly in Lima, Peru.
Not that I mind too much, I know how to get around without navigation.
It does teleport to Peru, but it also fast-forwards time to about a year into the future, which caused my car to think it's overdue for that oil change. It even synced that back to headquarters and I got an email asking me to take it in for service.. (and since it arrived on the wrong side of the Dnieper, I just decided to wait it out)
Wish we could put it into a manual mode where you just reset its position once and then it updates based on wheel encoders & snapping to roads.
The technology should be resilient against GPS spoofing. If it “knows” it never left the mountain road, it’s not crazy to design it to reject an anomalous GPS signal, which might be wrong or tampered with.
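(Something along those lines - a toy sketch of the plausibility check, with a made-up speed threshold and a rough flat-earth distance approximation:)

import math

def approx_distance_m(lat1, lon1, lat2, lon2):
    # equirectangular approximation; good enough for short hops
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(x, y)

def accept_fix(prev_fix, new_fix, dt_seconds, max_speed_mps=70):
    # reject a GPS fix that implies the car moved faster than it plausibly could
    return approx_distance_m(*prev_fix, *new_fix) <= max_speed_mps * dt_seconds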
I think the likelihood of that happening is significantly less than the likelihood that the car took a new road or other path not shown in the car's mapping data.
>(This is probably because Google Maps can be used for walking/biking too)
Please don't do that. The map is simply not good enough and does not have enough context (road quality, terrain, trail difficulty) for anything but very casual activity. Even then, I highly recommend using a proper map, electronic or paper.
It has a lot more map data accessible and you can even overlay National Park Service maps, land ownership, accurate cell service grids, mountain biking trails, weather conditions and things like that.
Disclaimer: Just because you see a route on a map, digital or paper, does not mean it is passable today. Or it may be passable but at an extremely arduous pace.
We used the walking directions for dual sport motorcycles once. It was pretty nice. We did have a few places where it got sketchy. Those places, and maybe more, would be sketchy for walking too. Not that Google Maps could do much about it. Terrain is a living thing. These were mostly huge cracks in the earth due to rain water.
Trail? Terrain? I use it for walking for 10-20 mins around a (mostly flat) city, and I expect that's what 90% of people use it for; the comment didn't mention hiking.
It depends what you are doing, but for hill walking in Italy I found the footpathapp.com app good. There are no decent paper maps in the area I go to, and Google Maps is also rubbish for local paths, but the app kind of draws in paths based on satellite images, I think, and you can draw on it to mark the ones you've been on.
By default, it (1) captures stdout and stderr of all processes and (2) creates a tty for the process's stdout.
Those are really bad defaults. The tty on stdout means many programs run in "interactive" rather than "batch" mode: programs which use a pager get their output truncated, auto-colors may get enabled and emit ESC controls into the output streams (or not, depending on the user's distro... fun!). And captured stderr means warnings and progress messages just disappear.
For example, this hangs forever without any output, at least if executed from an interactive terminal:
from sh import man
print(man("tty"))
Compare to "subprocess" which does the right thing and returns manpage as a string:
Can you fix "sh"? sure, you need to bake in option to disable tty. But you've got to do it in _every_ script, or you'll see failure sooner or later. So it's much easier, not to mention safer, to simply use "subprocess". And as a bonus, one less dependency!
(Fun fact: back when "sh" first appeared, everyone was using "git log" as an example of why the tty was bad (it was silently truncating data). They fixed it.. by disabling the tty only for the "git" command. So my example uses "man" :) )
The Apollo Guidance Computer was 2 MHz, ~72 KB ROM, ~4 KB RAM.
The comparison might be up to 10x different due to more efficient architecture and different MIPS/MHz ratio, but it does not change much, since the differences are so dramatic.
(This is based on the links in the podcast description, which I assume are what they talked about. Those are pretty new keyfobs; older ones might have something like the nRF24LE01, which is only 16 MHz, 18 KB Flash, 1 KB RAM)
If an app requires a permission, how does the OS know that it's OK to grant it? For example, I want to back up my system, so I install an app which needs a permission called "bypass any file access control and let me read every file". How does the OS know it's legitimate and not malware trying to steal data?
It could be "this requires special digital signature from OS manufacturer" -> then the private key of this digital signature is a "god object"
It could be "this requires confirmation from the physically present user" -> then you basically have passwordless sudo
It could be "this requires users pin/password/biometrics" -> then you have regular sudo
Either way, there is some source of authority in here, even if it's called "root key" or "user pin" instead of "root account".
Let me preface this by saying it is wildly impractical, but you could boot into a separate, minimal OS that mounts your primary OS disk and manages those permissions.
For an extra layer, have the “god mode OS” installed on physically read-only media, and mount the primary OS in a no-exec mode.
Regular OS can’t modify permissions, and the thing that can modify permissions can’t be modified.
It’s too clunky for home use, but could probably be used for things like VM images (where the “god mode OS” is the image builder, and changing permissions would require rebuilding the image and redeploying).
Some BSDs have the concept of "securelevel" - a global setting that can be used to permanently put the system into a mode which restricts certain operations, like writing to raw disks or truncating logs.
The idea is that if you want to modify the system, you reboot into single-user mode and do what you need. It does not start ssh / networking by default, so it is accessible from the local console only.
And of course plenty of smaller MCUs (used in IoT devices) can be locked down to prevent any sort of writing to program memory - you need an external programming adapter to update the code. This is the ultimate security in some sense - no matter what kinds of bugs you have, a power cycle will always restore the system to a pristine state (*unless there is a bug in the settings parser).
>then the private key of this digital signature is a "god object"
You could instead require the app to be part of the OS. The next gotcha from you, I imagine, would be that the build farm for the next OS update is a god object, and at that point I think this is a meaningless tangent. I'll concede and say you have to trust your OS creator. But you always have to trust your OS creator for any OS.
>then you basically have passwordless sudo
If sudo couldn't be used from other programs / shell scripts and didn't give access to a god account, but instead did simple things like let you use ping, then that seems fine to me. But why require people to manually wrap programs when it could be handled automatically?
>Either way, there is some source of authority in here
Sure, but it's a system that's much better than sudo.
> the build farm for the next OS update is a god object
This is a very interesting question! It may sound meaningless to you, but modifying firmware images is pretty common when trying to modify locked-down hardware. As in, "I'll unpack the firmware image, set root password and enable telnet, then flash it back". So no, the build farm is not a god object. Whatever controls firmware updates is. Can any app initiate it? Or does it need user's password? Or maybe physical presence? Or a private key that only select people have?
> If sudo couldn't be used from other programs / shell scripts and doesn't give access to a god account.. But why require people to manually wrap programs..
So, you mean something like "polkit"? This is what systemd is doing - instead of requiring "sudo", commands like "systemctl start SOMETHING" handle privilege escalation themselves. For example, on my computer, running that in a terminal pops up an interactive dialog asking for my password. In theory, you could have a whole suite of programs - "secure-cp", "secure-mv", "secure-edit" (see also: "sudoedit"), "secure-find", etc... But it seems pretty wasteful, no? Sure, the most common actions (installing/removing apps, configuring networks) can get their own nice privilege-escalating wrappers, but there are many advanced tasks a user can do, and it's much easier to make (and audit) a single "sudo" than hundreds of random scripts.
(Unless you want a fully locked-down system where only the OS creator can decide which privileged actions are allowed. Those things exist and are pretty popular: Android and iOS. They are also only usable for very specific purposes, basically as remote terminals to server machines running unrestricted OSes without such limitations)
> You could instead require the app to be part of the OS.
That almost sounds like you're advocating for the abolishment of third party or user-made apps that can make changes to the system without the approval of the manufacturer.
This is about being able to read any file on the system including things like the user's bank authentication tokens. No 3rd party developers should be able to read bank authentication tokens. The OS should create a safer API for 3rd parties to use for the use case they want.
Doesn't this just move the problem: which processes should the OS grant access to that API?
In any case, if the purpose is to make a backup of the system, the ability to read each and every file, as close to the original as possible, seems rather critical, in particular if we want to take advantage of e.g. content-addressing-based deduplication in the backup application. And in any case we want to be able to restore that backup to an empty computer, so there really are no places to hide the encryption keys in such a way that they cannot be read from the backup.
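(To illustrate what I mean by content-addressing-based deduplication - a toy sketch, not any particular backup tool:)

import hashlib

store = {}  # chunk hash -> chunk bytes

def add_chunk(data):
    # identical chunks hash to the same key, so each unique chunk is stored only once
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)
    return key

# a backup manifest is then just an ordered list of chunk keys per file
manifest = [add_chunk(chunk) for chunk in (b"config", b"config", b"data")]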
Raymond's posts are always fun to read, but sometimes he focuses more on the "proper" methods and does not even acknowledge that there are hacky workarounds.
Like in this case - sure, you cannot redefine the standard output handle, but that's not what the customer asked for, is it? They said "read", and I can see a whole bunch of ways to do so - ReadConsoleOutput + a heuristic for scrolling, code injection into the console host, attaching a debugger, setting up a detour on the logging function, a custom kernel module...
To be fair, as an MS support person, it's exactly the right thing to do. You don't want the person to start writing a custom kernel module when they should redirect stdout on process start instead. But as a random internet reader, I'd love to read all about the hacky ways to achieve the same!
> Raymond's posts are always fun to read, but sometimes he focuses more on the "proper" methods and does not even acknowledge that there are hacky workarounds.
Nor should he, IMO. Hacky workarounds are almost always a terrible idea that will bite you in the ass someday.
As a hacker - I'm sorry, reverse engineer - hacky workarounds are what I do. When I want to read the stdout of a malware process, I'm not going to ask the developer nicely, I'm going to grab my trusty debugger or API monitor.
But yeah, for production quality software hacks are the very last resort. It's still fun and enlightening to know them, though.
Hacky workarounds aren't rare exceptions; they're the plumbing of modern software. Anti-cheat and antivirus tools only work because they lean on strange kernel behaviors. Cloud platforms ship fixes that rely on undefined-but-stable quirks. Hardware drivers poke at the system in ways no official API ever planned for.
Yeah, they're ugly, but in practice the choice isn't between clean and hacky; it's between shipping and not shipping. Real-world software runs on constraints, not ideals.
On the other hand, everything you ship outside of a clearly established golden path is a maintenance burden that piles and piles and piles. And these maintenance burdens tend to gradually slow the org down until they cause rather catastrophic failures, usually due to security or hardware (read: fire) incidents. Or for HR reasons, because people figure there are better places to fight fires.
Had a WPF touch interface application that would latch on when a person presses, holds, and slides their finger off the screen. Highly unacceptable when it controls a machine that could remove a limb.
The only fix was to write a custom touch screen event handler that overrides the built-in one from Microsoft.
I would love to have a _proper method_ instead of having to pull out my _hacky_ method that prevents the removal of a person's limb.