nh2's comments

Thin tape

Smudged adhesive, sticky button, a dislocated tape, dirt, ugliness, etc.

I also find the lack of ports in a Framework frustrating.

My Thinkpad has

    USB-A
    USB-A
    USB-A
    USB-C
    HDMI
    Ethernet
    SD
    Charging
and a Framework has only half of that.

Most of these are used at least once per day.

I'm hoping for third party chassis offerings to solve this.


Do you have an article about that?

Is it technically possible to obtain a wildcard cert from LetsEncrypt, but then use OpenSSL / X.509 tooling to derive a restricted cert/key to be deployed on servers, which only works for specific domains under the wildcard?


Why not add this approach to postgres as a "JSONL3" type?

It'd be nice to update postgres JSON values without the big write amplification.


JSON columns shine when

* The data does not map well to database tables, e.g. when it's a tree structure (of course that could be represented as many table rows too, but that's complicated and may be slower when you always need to operate on the whole tree anyway)

* your programming language has better types and programming facilities than SQL offers; for example in our Haskell+TypeScript code base, we can conveniently serialise large nested data structures with 100s of types into JSON, without having to think about how to represent those trees as tables (rough sketch below).
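
A minimal sketch of what that looks like on the Haskell side, using aeson's Generic deriving (the types here are made up for illustration):

    {-# LANGUAGE DeriveGeneric #-}
    {-# LANGUAGE DeriveAnyClass #-}

    import Data.Aeson (ToJSON, FromJSON, encode, decode)
    import GHC.Generics (Generic)

    -- Hypothetical nested domain types; the derived instances give JSON
    -- (de)serialisation without designing a relational layout for the tree.
    data Leaf = Leaf
      { leafName  :: String
      , leafValue :: Int
      } deriving (Show, Generic, ToJSON, FromJSON)

    data Tree = Tree
      { treeLabel    :: String
      , treeChildren :: [Tree]   -- recursive, awkward to spread over table rows
      , treeLeaves   :: [Leaf]
      } deriving (Show, Generic, ToJSON, FromJSON)

    -- encode produces a ByteString ready to store in a json/jsonb column;
    -- decode reads it back when loading the column.
    roundTrip :: Tree -> Maybe Tree
    roundTrip = decode . encode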


You do need some fancy in-house way to migrate old JSONs to new JSON in case you want to evolve the (implicit) JSON schema.

I find this one of the hardest parts of using JSON, and the main reason why I'd rather put data in proper columns. Once I go JSON, I need a fair bit of code to deal with migrations (either doing them during schema migrations, or having some way to do them at read/write time).


Since OP is using Haskell, the actual code most likely won’t touch the JSON type directly, only the domain type. This makes migrations super easy to write. Of course they could have written a fancy in-house way to do that, or just used the safe-copy library, which solves this problem and has been around for almost two decades. In particular it solves the “nested version control” problem of data structures containing other data structures at varying versions.

Yes, that's what we do: Migrations with proper sum types and exhaustiveness checking.
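
Roughly, the pattern looks like this (type and field names invented for illustration; safe-copy packages the same idea with explicit version tags and chained migrations):

    {-# LANGUAGE DeriveGeneric #-}
    {-# LANGUAGE DeriveAnyClass #-}

    import Data.Aeson (ToJSON, FromJSON)
    import GHC.Generics (Generic)

    -- Every historical shape of the stored JSON gets its own constructor,
    -- so old rows can still be decoded.
    data StoredConfig
      = ConfigV1 { v1Host :: String }
      | ConfigV2 { v2Host :: String, v2Port :: Int }
      deriving (Show, Generic, ToJSON, FromJSON)

    -- The current in-memory domain type that the rest of the code works with.
    data Config = Config { host :: String, port :: Int }
      deriving (Show)

    -- Exhaustive case: adding a ConfigV3 constructor later makes GHC (with
    -- -Wincomplete-patterns) flag every migration that must be updated.
    migrate :: StoredConfig -> Config
    migrate sc = case sc of
      ConfigV1 h   -> Config h 5432    -- old rows get a default port
      ConfigV2 h p -> Config h p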

Which part of the nix language looks like Perl?

I actually find the language simple and easy to learn: It's just untyped lambda calculus with dicts and lists.

(I, too, would like static types though.)


I'm not them, but TIMTOWTDI ("there's more than one way to do it") is a bad thing, and Nix suffers from it. That's the main Perl-ism I can think of.


I'd like to have local, fully offline, open-source software into which I can dump all our emails, Slack, Gdrive contents, code, and wiki, and then query it with free-form questions such as "with which customers did we discuss feature X?", producing references to the original sources.

What are my options?

I want to avoid building my own or customising a lot. Ideally it would also recommend which models work well and have good defaults for those.


This is why I built the Nextcloud MCP server, so that you can talk with your own data. Obviously this is Nextcloud-specific, but if you're using it already then this is possible now.

https://github.com/cbcoutinho/nextcloud-mcp-server

The default MCP server deployment supports simple CRUD operations on your data, but if you enable vector search the MCP server will begin embedding docs/notes/etc. Currently ollama and openai are supported as embedding providers.

The MCP server then exposes tools you can use to search your docs based on semantic search and/or bm25 (via qdrant fusion) as well as generate responses using MCP sampling.

Importantly, rather than generating responses itself, the server relies on MCP sampling so that you can use any LLM/MCP client. This MCP sampling/RAG pattern is extremely powerful and it wouldn't surprise me if there was something open source that generalizes this across other data sources.


Would love to see someone build an example using the offline wikipedia text.


Given the full text of Wikipedia is undoubtedly part of the training data, what would having it in a RAG add?


High-precision recall.

It may also be cheaper to update the source (Wikipedia) with new information than to update the model.


Please do not use second-resolution mtimes (they cannot represent the high-accuracy mtimes that modern OSs use, so packing and unpacking causes differences, e.g. in rsync), or build anything new using DEFLATE (it is slow and cannot really be made fast).
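
Not a fix, just an illustration of the mtime point as a small Haskell sketch (the file name is hypothetical): modern filesystems report sub-second mtimes, and a format that stores whole seconds has to drop the fraction, so the unpacked file no longer matches the original for tools that compare mtimes, such as rsync.

    import System.Posix.Files (getFileStatus, modificationTimeHiRes)

    main :: IO ()
    main = do
      st <- getFileStatus "example.txt"           -- hypothetical file
      let mtime     = modificationTimeHiRes st    -- e.g. 1718030400.123456789s
          truncated = fromInteger (floor mtime)   -- what a 1-second format keeps
      putStrLn $ "precision lost by whole-second mtime: " ++ show (mtime - truncated)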


It is still not a proper fix. It is still busy-looping at 100% CPU.

Given that GitHub Actions is quite popular, this probably wastes a large amount of energy.

But probably good at generating billable Actions minutes.

One can only hope that not many people use sleeps to handle their CI race conditions, as that itself is also not a proper fix.


Clearly the job for a microservice. Accept number of seconds to wait as url, return content after that many seconds. Then just use curl in runner.


Brb founding a SaaS startup. I’ll call it cloudsleep dot io of course. After our Series B I’ll buy the .com.

Only task left before lining up investors: how can I weave AI into our product?


Describe the task you're waiting for as text, and let an LLM pick the number of seconds for each request. The better the model you clearly need for this, the more expensive it gets. There, your AI pitch.


Excellent, welcome aboard, Chief Product Officer!


Retries won’t work in that case. It would be better to have two endpoints: one that returns the time x seconds from now, and one that waits until a given time has passed. That way retrying the wait endpoint works fine, and if the time hasn’t elapsed it can just curl itself with the same arguments.


If you have curl (but not sleep) sure, but if not maybe you can use bash's wacky /dev/tcp. The microservice could listen on ports 1 through 64k to let you specify how many seconds to sleep.


Yeah, definitely not a proper fix.

Maybe a more serious fix is something like "read -t $N". If you think stdin might not be usable (like maybe it will close prematurely) this option won't work, but maybe you can open an anonymous FD and read from it instead.


    while wait && [[ $SECONDS -lt $1 ]]; do
      read -t $(($1 - SECONDS))
    done <><(:)
although if you're not too concerned about finishing early if a signal interrupts, probably

    { wait; read -t $1; } <><(:)
would be fine. You want the wait because otherwise bash won't reap the zombie from the process substitution until after the read times out.

Interestingly, it does reap on a blocked read without -t, so potentially the behaviour on -t would be considered a bug rather than as-designed.

There's also a loadable sleep builtin supplied with bash which calls into the internal fsleep(), so it should be reliable and doesn't fork.


It's just all hacks. It should use `sleep`, period.

Sleeping is an OS scheduling task. Use the OS syscall that does that.

As is suggested in the GitHub issue that Microsoft has been ignoring for half a year.

