marak830's comments

Same, I love the idea, but I cannot read the hiragana/katakana (heck, even something like はい / hai / "yes" would work well).

Edit: Decided to make my own Firefox add-on to do it. No daily limits to worry about, and I can simply update a JSON file with more words when I feel like I'm remembering things.
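
For anyone curious how small such a thing can be: the core is just a content script plus a word list. Here's a rough sketch of one way to do it (TypeScript, compiled to the content script JS and listed in manifest.json under content_scripts; the inlined WORDS array stands in for the bundled words.json, and the entries are placeholders, not my actual list):

    // content.ts: swaps known English words for their Japanese equivalents
    // in page text, keeping the original word and romaji in a hover tooltip.
    type Entry = { en: string; jp: string; romaji: string };

    // Stand-in for a bundled words.json; add entries as you learn them.
    const WORDS: Entry[] = [
      { en: "yes", jp: "はい", romaji: "hai" },
      { en: "no", jp: "いいえ", romaji: "iie" },
    ];

    const byEnglish = new Map(WORDS.map(w => [w.en, w]));
    const pattern = new RegExp(`\\b(${WORDS.map(w => w.en).join("|")})\\b`, "gi");

    function annotate(node: Text): void {
      pattern.lastIndex = 0; // global regexes are stateful
      if (!pattern.test(node.data)) return;
      const span = document.createElement("span");
      span.innerHTML = node.data.replace(pattern, match => {
        const entry = byEnglish.get(match.toLowerCase());
        return entry ? `<span title="${match} (${entry.romaji})">${entry.jp}</span>` : match;
      });
      node.replaceWith(span);
    }

    // Collect text nodes first, then mutate, so the walk isn't invalidated.
    const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
    const nodes: Text[] = [];
    while (walker.nextNode()) nodes.push(walker.currentNode as Text);
    nodes.forEach(annotate);

No daily limit, and "adding a word" is a one-line edit to the list.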


Do you perchance have this on GitHub? I like the idea but would love it for another language, so I'd be interested in your add-on.

I don't, but I can.

I've just whipped one together and am currently testing it out. I won't have time tonight, but check back tomorrow and I should have it up by then. (I'll post a reply to my own comment here.)


Huh, could this be why I can't log in, and why pushing packages says my account is banned?


I switched 3 weeks ago (first Ubuntu, now the badly named POP!_OS) and couldn't be happier.

Fast, a slight learning curve (took me a weekend), and I'm back to gaming and coding regularly.


Done, thank you for the link.


Haha, that reminds me: QBasic, using the help file to figure out how to program. Taking apart a hard drive and getting my fingers pinched between the two bloody strong magnets.

Amazing what you learn when you have no other distractions xD


Ollama is a good one. LM Studio is great for those who are unsure what to do (it will help you pick a model that fits your system specs).

If you use Open WebUI (I recommend running it via Docker), you can access your Ollama-hosted model via the browser on any device on your network. Tailscale will help make it accessible remotely.
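
For reference, the Docker one-liner for running Open WebUI against a host-local Ollama looks roughly like this (check the Open WebUI README for the current flags; the port mapping and volume name are up to you):

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main

That puts the UI on http://localhost:3000; from there, any device on your tailnet can reach it via the machine's Tailscale address.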

I'm currently working on an open-source long-term memory system designed to work with Ollama, to help local models be more competitive with the big players, so we're not so beholden to these big companies.
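
For a feel of the shape such a system takes, here's a toy sketch of the write side (not the actual project): embed each remembered snippet with Ollama's embeddings endpoint and keep the vector next to the text. The model name and in-memory store are placeholder choices:

    // memory.ts: write side of a toy long-term memory layer for Ollama.
    // Assumes Ollama on its default port with an embedding model pulled,
    // e.g. `ollama pull nomic-embed-text`.
    export interface Memory {
      text: string;
      vector: number[];
    }

    const OLLAMA = "http://localhost:11434";
    const EMBED_MODEL = "nomic-embed-text";

    export async function embed(text: string): Promise<number[]> {
      const res = await fetch(`${OLLAMA}/api/embeddings`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: EMBED_MODEL, prompt: text }),
      });
      const { embedding } = (await res.json()) as { embedding: number[] };
      return embedding;
    }

    // A real system would persist this (SQLite, a vector store, ...).
    export const store: Memory[] = [];

    export async function remember(text: string): Promise<void> {
      store.push({ text, vector: await embed(text) });
    }

Retrieval is then just nearest-neighbour search over this store.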


That sounds great — thank you for working on this. I’m not a developer, just curious about AI in general. Local AI feels like the right direction if we want to save energy and water, too. Is your memory system open source?


It will be; I'm applying for an NLnet grant, and open-sourcing it to non-corporations is one of the requirements. (I need more hardware to develop on; I already fried one SSD, haha.)


I'm currently working as a PCM analyst, looking at samples to identify asbestos. There is a lot of it still out there; we are busy every week. (Technically, PCM doesn't identify asbestos, just the number of fibers during abatement. PLM will identify asbestos, but that takes a lot longer to process.)


Interesting. I had some vermiculite removed recently and got a response from the ZAI trust that the samples had fibers, but they couldn't say specifically that it was asbestos. I assumed that was a legal distinction; it didn't occur to me that it might come down to different test methods.


Yeah, for a PCM test we only count 100 fields and identify the number of fiber end points (up to 2), which with some math gives an approximate number of fibers/cc. That helps determine roughly how much could potentially be in the air (this is usually done during abatement, when it's being cleaned up).
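
For a sense of the arithmetic, it's roughly the NIOSH 7400-style calculation. The constants below are the usual textbook values, so treat this as illustrative rather than exactly our lab's procedure:

    // Rough PCM fiber-count arithmetic (NIOSH 7400 style), illustrative only.
    const FIELD_AREA_MM2 = 0.00785; // Walton-Beckett graticule field area
    const FILTER_AREA_MM2 = 385;    // effective collection area, 25 mm filter

    function fibersPerCc(fibersCounted: number, fields: number, litresSampled: number): number {
      const densityPerMm2 = fibersCounted / (fields * FIELD_AREA_MM2);
      return (densityPerMm2 * FILTER_AREA_MM2) / (litresSampled * 1000); // 1 L = 1000 cc
    }

    // e.g. 20 fibers over 100 fields on a 480 L air sample:
    // fibersPerCc(20, 100, 480) ≈ 0.020 f/cc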

A PLM analyst will use multiple methods to determine whether the sample contains asbestos, and that takes a much longer time.

There are even more expensive tests that can be performed, but I'm not so familiar with those.


This is actually along the lines of what I'm working on in my free time at the moment: extending a local model's memory so that smaller self-hosted models can become a better solution than paying someone else.

Once this is working better, it will extend the abilities of local models without hitting the massive context-limitation issues I personally ran into when self-hosting.
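
To make that concrete, here's the read side of the toy sketch from my earlier comment: instead of carrying the whole history in the prompt (and blowing past the context window), embed the query, rank stored memories by cosine similarity, and prepend only the top few. The names and the k = 3 cutoff are placeholders:

    // recall.ts: retrieval side of the memory sketch (see memory.ts above).
    import { embed, store } from "./memory";

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    export async function recall(query: string, k = 3): Promise<string[]> {
      const qv = await embed(query);
      return store
        .map(m => ({ m, score: cosine(m.vector, qv) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k)
        .map(s => s.m.text);
    }

The prompt then becomes "relevant memories + new question", which stays roughly constant-size no matter how long the history grows.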


It's up in Japan.


I use https://hckrnews.com/ so I can see the stories in chronological order. Makes the "front page" effect basically disappear.


Interesting. Thanks!

