it's really possible to top out on some of these things. once you're at the point where you're like 'I could try to do this professionally, but that would ruin the fun of it, and it's a really tough market'
plus, don't forget about the substantial benefits of cross-training
if you find something that's really your thing, that's great, but the goal is to interest and engage yourself. why is it so important to demand the time to eke out that last 5%?
it's also possible to actually blow through a field. the scope of coding projects I can spit out as rote is well past what anyone is willing to adopt, so what would be the point of training any further? there's a lot more left to learn about machining.
where else are you going to find customers so sticky it will take years for them to select another solution, regardless of how crappy you are? customers who will staff teams to work around your failures; who, when faced with obvious evidence of your product's dysfunction, will roundly blame themselves for not holding it properly, and gaslight their own users; who will pay obscene amounts for support when all you provide is a voicemail box that never gets emptied; who will happily accept your estimate of the number of seats they need; and who, when holding a retro about your failure, will happily proclaim that there wasn't anything _they_ could have done, so case closed.
I love the model; it's nice to be able to generate things parametrically instead of grabbing knots with the mouse, so I use scad pretty often.
but it has real problems:
the language is weird and unfortunate. nothing fatal, just the obvious product of evolution; it would be more cohesive if it had been architected wholesale
epsilons are really unfortunate. you have to expect that after getting the shape you want on the whole, you're going to have to scan over the whole thing looking for cracks or collisions where there shouldn't be any
performance is quite sad. at first you're happily going back and forth between the view and text windows, but once you have reasonably complicated geometry it starts taking .. minutes .. to update the view
high-level operators would also be nice. I made the mistake of using a thread library once; not only did that make my model unrenderable, there was so much noise in the model and the manufacturing process that I had to make three expensive test prints in sintered nylon to get the fit right. (I'm thinking of an annotation on a cylinder that says 'standard 1mm thread'.)
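the epsilon problem is easy to show in one dimension. here's a toy sketch (plain Python with invented helpers, not OpenSCAD): a cutter meant to reach exactly to a face lands a float-ulp short, leaving a degenerate sliver, and padding the cutter by an epsilon avoids it.

```python
# Toy 1-D analogue of the CSG epsilon problem (hypothetical helpers,
# not OpenSCAD API). Solids and cutters are just intervals here.

EPS = 0.01  # pad cutters so they poke cleanly through the face they cut

def difference(solid, cutter):
    """Subtract interval `cutter` from interval `solid`."""
    lo, hi = solid
    clo, chi = cutter
    parts = []
    if clo > lo:
        parts.append((lo, min(clo, hi)))
    if chi < hi:
        parts.append((max(chi, lo), hi))
    return [p for p in parts if p[1] - p[0] > 0]

# A cutter meant to reach exactly to the face at 0.8, but computed as
# 0.7 + 0.1 -- which falls a hair short in floating point:
sliver = difference((0.0, 0.8), (0.4, 0.7 + 0.1))        # leaves a tiny skin
clean  = difference((0.0, 0.8), (0.4, 0.7 + 0.1 + EPS))  # cutter overshoots: no skin
```

the same habit in scad - extending every subtracted solid a little past the surfaces it cuts through - gets rid of most of the cracks, at the cost of littering the model with magic 0.01s.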
It actually renders things incredibly fast if you get the nightly version and set the backend to Manifold - probably 100x faster (!!). In fact it renders so fast that I put a render() command at the top of my hierarchy so that everything just renders all the time. I make incredibly complex models with it too, with hundreds of holes, complex SVG files with text in them, etc.
It does a full render every time I save the file - basically the regular OpenSCAD update-on-save workflow, but with a full render instead of a preview.
it's not. I've been in a few shops that use RDS because they think their time is better spent doing other things.
except now they're stuck trying to maintain and debug Postgres without the visibility and agency they'd have if they hosted it themselves. the tradeoff isn't at all clear.
One thing unaccounted for if you've only ever used cloud-hosted DBs is just how slow they are compared to a modern server with NVMe storage.
This leads developers to all kinds of workarounds, reaching for more cloud services (and then integrating them and, often poorly, ensuring consistency across them) because the cloud-hosted DB can't handle the load.
On bare-metal, you can go a very long way with just throwing everything at Postgres and calling it a day.
This is the reason I manage SQL Server on a VM in Azure instead of their PaaS offering. The fully managed SQL has terrible performance unless you drop many thousands a month. The VM I built is closer to 700 a month.
Running on IaaS also gives you more scalability knobs to tweak: SSD IOPS and bandwidth, multiple drives for logs/partitions, memory-optimized VMs, and a lot of low-level settings that aren't accessible in managed SQL. Licensing costs are also horrible with managed SQL Server: you effectively pay for the Enterprise edition, while running it yourself lets you use lower-cost editions like Standard or Web.
I use Google Cloud SQL for PostgreSQL and it's been rock solid. No issues; troubleshooting works fine; all extensions we need already installed; can adjust settings where needed.
it's more of a general condition - it's not that RDS is somehow really faulty, it's just that when things do go wrong, it's not really anybody's job to introspect the system, because RDS is taking care of it for us.
in the limit I don't think we should need DBAs, but as long as we need to manage indices by hand, think more than 10 seconds about the hot queries, manage replication, tune autovacuum, track updates, and all the other rot, actually installing PG on a node of your choice is really the smallest of the problems you face.
this term can be used at a couple of different points (including mappings from physical addresses to physical hardware in the memory network), but a PCI BAR is a register in the card's configuration space that tells it which PCI host addresses map to internal memory regions on the card. one BAR per region.
the PCI BARs are usually configured by the driver after allocating some address space from the kernel.
DRAM BARs in the switching network are generally configured by something running at the BIOS level, based on probes of the memory controllers and I2C reads from the DIMMs to find out capacity.
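for concreteness, here's a minimal sketch (plain Python, invented helper name) of how the low bits of a 32-bit BAR value encode what kind of region it maps, per the PCI configuration-space layout:

```python
def decode_bar(value: int) -> dict:
    """Decode a 32-bit PCI BAR register per the PCI spec bit layout."""
    if value & 0x1:                       # bit 0: 1 = I/O space BAR
        return {"space": "io", "base": value & ~0x3}
    return {
        "space": "memory",
        "is_64bit": (value >> 1) & 0x3 == 0x2,  # bits 2:1 = 10b -> 64-bit BAR pair
        "prefetchable": bool(value & 0x8),      # bit 3
        "base": value & ~0xF,                   # bits 31:4 hold the base address
    }

# e.g. a 64-bit prefetchable memory BAR programmed at 0xF000_0000:
bar = decode_bar(0xF000000C)
```

(for a 64-bit BAR, the next BAR slot holds the upper 32 address bits; sizing works by writing all-ones and reading back the mask, which this sketch doesn't show.)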
I don't have a problem. if they get a little funky I just sand them down and let them soak in food-grade mineral oil for a while. same with cutting boards and butcher-block tables.
one question that always plagues me when we talk about mixing manual and automatic memory systems is: how does it work? if we have a mixed graph of automatic and manual objects, it seems like we don't have a choice except to enable garbage collection for everything and add a new root (call it the programmer) that keeps track of whether or not each object has been explicitly freed.
since we still have the tracing overhead and the same lifetimes, we haven't really gained much by having manual memory.
D's best take on this is a compile-time attribute (@nogc) that basically forbids allocating GC memory in the affected region (please correct me if I'm wrong), but that is pretty limited.
does anyone else have a good narrative for how this would work?
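to make the 'programmer as an extra root' framing concrete, here's a toy mark-and-sweep sketch (all names invented, Python for illustration) where manual objects are pinned as roots until free() is called:

```python
# Toy sketch of the "programmer as root" idea: manually-managed objects
# act as GC roots until explicitly freed, then become ordinary garbage
# for the tracer. Hypothetical names, not any real runtime's API.

class Obj:
    def __init__(self, manual=False):
        self.refs = []        # outgoing edges in the object graph
        self.manual = manual  # allocated with "manual" lifetime
        self.freed = False    # has the programmer called free()?

class Heap:
    def __init__(self):
        self.objects = []
        self.roots = []       # ordinary GC roots (stacks, globals)

    def alloc(self, manual=False):
        o = Obj(manual)
        self.objects.append(o)
        return o

    def free(self, o):
        assert o.manual, "free() only applies to manual objects"
        o.freed = True        # no longer a root; the tracer may reclaim it

    def collect(self):
        # The "programmer root set": every not-yet-freed manual object.
        roots = self.roots + [o for o in self.objects
                              if o.manual and not o.freed]
        marked = set()
        stack = list(roots)
        while stack:
            o = stack.pop()
            if id(o) not in marked:
                marked.add(id(o))
                stack.extend(o.refs)
        self.objects = [o for o in self.objects if id(o) in marked]
```

note that collect() still traces through every manual object's reachable subgraph, which is exactly the overhead the question is about.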
There are many automatic memory management systems, ranging from the simple cleanup of immutable systems (https://justine.lol/sectorlisp2/), to region allocation, to refcounting with cycle collection, to full-fat tracing.
I'd have thought that allocating a block of memory per GC type would work. As in Rust, you can use mainly one kind of management, with a smaller section for e.g. cyclic data allocated in a region, which can be torn down when no longer in use.
If you think of it like a kernel, you can have manual management in the core (e.g. hard-realtime stuff) and GC in userland. The core can even time-slice the GC. Forth is particularly amenable since it uses stacks, so you can run with just those most of the time.
> Real-Time Java extends this memory model to support two new kinds of memory: immortal memory and scoped memory. Objects allocated in immortal memory live for the entire execution of the program. The garbage collector scans objects allocated in immortal memory to find (and potentially change) references into the garbage collected heap but does not otherwise manipulate these objects.
> Each scoped memory conceptually contains a preallocated region of memory that threads can enter and exit. Once a thread enters a scoped memory, it can allocate objects out of that memory, with each allocation taking a predictable amount of time. When the thread exits the scoped memory, the implementation deallocates all objects allocated in the scoped memory without garbage collection. The specification supports nested entry and exit of scoped memories, which threads can use to obtain a stack of active scoped memories. The lifetimes of the objects stored in the inner scoped memories are contained in the lifetimes of the objects stored in the outer scoped memories. As for objects allocated in immortal memory, the garbage collector scans objects allocated in scoped memory to find (and potentially change) references into the garbage collected heap but does not otherwise manipulate these objects.
> The Real-Time Java specification uses dynamic access checks to prevent dangling references and ensure the safety of using scoped memories. If the program attempts to create either 1) a reference from an object allocated in the heap to an object allocated in a scoped memory or 2) a reference from an object allocated in an outer scoped memory to an object allocated in an inner scoped memory, the specification requires the implementation to throw an exception.
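a minimal sketch of the scoped-memory discipline the quote describes, with the dynamic access check that rejects outer-to-inner references (invented names, Python for illustration, not the RTSJ API):

```python
# Toy model of Real-Time-Java-style scoped memory: objects may only
# reference objects whose scope is at the same or an outer (longer-lived)
# nesting level. All names here are made up for illustration.

class ScopeError(Exception):
    pass

class ScopedAllocator:
    def __init__(self):
        self.depth = 0            # 0 = heap/immortal, >0 = scope nesting level

    def enter(self):
        self.depth += 1           # thread enters a nested scoped memory

    def exit(self):
        # Exiting tears down every object allocated at this depth at once,
        # with no garbage collection.
        self.depth -= 1

    def alloc(self):
        return {"scope": self.depth, "ref": None}

    def set_ref(self, src, dst):
        # Dynamic access check: src may not point "inward" to a
        # shorter-lived scope, which would become a dangling reference.
        if dst["scope"] > src["scope"]:
            raise ScopeError("reference from outer scope into inner scope")
        src["ref"] = dst
```

inner-to-outer references are fine (the outer object outlives the inner one); the check only fires on the dangerous direction.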
There are some interesting experiments going on in the OCaml world involving what they call 'modes': essentially a second type system for how a value is used, separate from what it is. One goal of modes is to solve this problem. It ends up looking a bit like opting in to a Rust-style borrow checker for the relevant functions.
Aren’t FEC codes already used in satellite transmissions? I recall reading that the patents around FEC codes had something to do with the satellite industry.
definitely - that's the only reason I know about them. usually you have to do a little more, because the errors are very bursty, so you use very wide windows or interleave in order to spread the errors out so that the redundancy can cover them.
if you're saying 'that's a link-layer problem', then I agree, but it would be better to change your link-level encoding strategy than to just start sending multiple copies at the transport layer.
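the interleaving idea is easy to sketch (toy block interleaver in Python; real links use convolutional interleavers and codes like Reed-Solomon, but the principle is the same): codewords go in as rows, symbols go out by columns, so a burst on the channel turns into isolated per-codeword errors the FEC can correct.

```python
# Toy block interleaver: each row of the matrix is one FEC codeword;
# symbols are transmitted column by column. A burst of channel errors
# no longer than the interleaver depth then hits each codeword at most once.

def interleave(data, rows, cols):
    """Write `rows` codewords of length `cols`, read out by columns."""
    assert len(data) == rows * cols
    matrix = [data[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Invert interleave(): regroup columns back into codeword rows."""
    assert len(data) == rows * cols
    matrix = [data[c * rows:(c + 1) * rows] for c in range(cols)]
    return [matrix[c][r] for r in range(rows) for c in range(cols)]

# With a depth of 4 codewords, a 4-symbol burst on the wire lands as a
# single (correctable) error in each codeword after deinterleaving.
```

the tradeoff is latency: the receiver has to buffer a whole rows-by-cols block before it can hand any codeword to the decoder, which is the 'very wide windows' cost above.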