wiether's comments

Obsidian is plain Markdown and JSON files.

So it's not like the devs could be deliberately making it hard to sync.

It's just that, unlike git or Dropbox or whatever, which are generic syncing tools, Obsidian Sync was built to provide the best experience with Obsidian.


I'm talking more about the plugin architecture, not about the file format or third-party applications. The sync plugins seem pretty limited compared to what's offered with the subscription.

Someone reverse-engineered Obsidian Sync a couple of years ago, but Obsidian ended up "patching" it. Saw some recent discussion on here about it: https://news.ycombinator.com/item?id=44768641

Seems fair to me.

Obsidian Sync has always been presented as a paid add-on, there to provide income for the company that builds Obsidian and gives it away for free.

If they provided a direct BYOS(ync/erver) mechanism, fewer people would pay for the add-on, which is their source of income.

Instead, they let you use your own sync mechanism by relying only on text files.

I understand why some people could get upset about this, but they've always been transparent:

- no proprietary format; you can migrate at any time without effort

- free but closed-source software

- add-ons for income


I’ve been using Obsidian with Dropbox sync for years. What’s so special about the Obsidian sync?

Can Obsidian with Dropbox work on an iOS device?

With Obsidian Sync you manage everything directly in Obsidian: sync status, activity, history, selective sync...

And "it just works" across platforms, without having to think about or set up anything else.


It's not per-vault, is it? I have multiple vaults I'd like to sync selectively (50% of the files in one vault on one machine, 100% on another, etc.). No space restrictions?

I only use a single vault, so I'm afraid I can't answer your question.

When I talk about selective sync, it's about what gets synced within a vault, and more specifically Obsidian settings/plugins...

I don't have the need to selectively sync only some of my vault's content, so never looked into it.

I just know that Obsidian Sync does what I'm expecting it to do.

And to add some context: I'd rather they just added a regular "Obsidian" subscription that included vault sync, instead of giving away Obsidian for free and selling add-ons. Because on its own, Obsidian Sync is quite expensive. If I'm willing to pay that much for that little, it's because, to me, I'm paying first and foremost for the development of Obsidian itself.

But I understand why they wanted to go this way.

I don't know if it is/was the best move, because I see lots of people unwilling to use Obsidian just because they feel the devs are "scamming" people with their expensive Sync add-on.


To enjoy the native ease of use and security of Obsidian Sync as a human user on your devices, while being able to automate things on a server.

Still wondering why using xcancel is not mandatory for sharing Twitter posts on HN

https://xcancel.com/TehKeripo/status/2027171532825571678


For the same reason that it is possible to share e.g. Bluesky/Facebook/Instagram/etc. links. If you don't want to open X links, you can use something like LibRedirect to make sure all such links go through a proxy; this is what I do.

Don’t they have a rate limit?

Some don’t mind opening Twitter, others can use services like xcancel.


That's a brilliant move from OpenAI.

In the past, people wanting to sign a juicy contract at a FAANG were told to spend hours everyday on Leetcode.

Now? Just spend tokens until you build something that gets enough traction to be seen by one of the big labs!


Just <whatever>… to gain lots of traction.

Gaining traction is the tough part.


> There’s a current business model where you can make a basic but useful tool that solves a specific business problem and make money. That’s going to end.

I don't know... The tool that solves a specific business problem usually requires tons of business expertise, and when a company buys the tool, it mainly does so for the expertise embedded in it.

If they haven't already made their own in-house implementation, it's because they don't want to invest in maintaining a tool that requires expertise outside of their actual business.

Meanwhile, the company building the tool can invest in keeping this expertise because it's financed by the multiple companies paying for the tool.


> see if it falls apart after handling one request per second

Most of the problems you talk about are problems if you intend your software to be used at scale.

If you're building an app for yourself to track your own food habits, why do the DB, the framework, or best practices matter?

People used to do this in an Excel sheet.

Now they can ask Claude to make them a nice UI similar to MFP or whatever.

Data can be stored in a single JSON file.

It's going to take years before they see actual performance issues.

And even if it does become an issue, an AI agent can already provide a fix and a script to migrate the data.
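To make the "single JSON file" point concrete: a personal tracker's whole persistence layer can be a few lines. A minimal sketch in Python, where the file name and the entry fields are made up for illustration:

```python
import json
from datetime import date
from pathlib import Path

DB = Path("food_log.json")  # hypothetical file name

def load_entries() -> list[dict]:
    # A missing file just means an empty log.
    return json.loads(DB.read_text()) if DB.exists() else []

def add_entry(food: str, calories: int) -> None:
    entries = load_entries()
    entries.append({"date": date.today().isoformat(),
                    "food": food, "calories": calories})
    # Rewrite the whole file; fine at personal scale for years.
    DB.write_text(json.dumps(entries, indent=2))

add_entry("oatmeal", 350)
```

If performance ever does become a problem, the same JSON file is trivial to migrate into SQLite or anything else.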

My only concern really is about security.

But put it on a private VPS only reachable through Tailscale and they're ahead of 99% of the rest.


All your points are valid, and I myself use these types of apps internally (e.g. for handling invoices). But the second your app talks to the internet, you are more likely to shoot yourself in the foot. Look what happened to Clawdbot: everyone who used it had their instances exposed to the internet.

AI can fix bugs, sure. But every time you ask it to fix the same problem, it will come up with a new solution - usually unnecessarily complex. Will we reach a point where the AI can be its own architect? Maybe. But, I know for a fact that it's not what we have right now.

Right now, AI needs an architect to tell it how it should solve a problem. The real value of software is in the lived human experiences, not just the code. That's why we make certain decisions different than an AI would.

Ask an AI to vibe-code an invoice app. It will make a really lovely-looking UI - which, unfortunately, is what people judge an app by - but with a MongoDB backend, which is totally the wrong solution for the problem. That's what I mean.


> If you're building an app for yourself to track your own food habits; why does DB, framework, best practices matters?

They don't; it's just annoying as shit when things break at the worst time for lack of these "best practices" and you know that the only answer will be "do better". I'll give you an example. Years ago I migrated a lot of my app usage to self-hosted OSS apps for all the reasons one might list. I did like 80% of what I perceived as the "important best practices": set up ZFS with redundancy to handle drive failures, a UPS for power interruptions, WireGuard for secure access, Docker for application and dependency isolation, etc.

But there were always things I just thought, "I should probably do that, but later. This is just for me."

It would be the end of the day, I'm tired and in bed wanting to just chill and watch something on my iPad, and what do you know, my Plex is down, again.

Why does it go down every few days? Now I need to go get a laptop, SSH into my server, check the Docker logs. See a bunch of exceptions. I don't want to debug it today. Just restart it; OK, it works again. Go to bed, start watching.

Twenty minutes in... I think it's down again. WTF? Get the laptop again, google the error: something about a SQLite DB on an NFS share not being very stable. All my ZFS storage is only exposed as NFS and SMB shares to another machine. OK, just restart it, hope it works, and I'll deal with it later.

Forget about it for a couple of days. I'm with a friend at her place and want to watch again, and fuck me, I never fixed the SQLite issue. Never mind, let's just watch Netflix.

Over the weekend, I'm determined to get this fixed. Move the application folder out of NFS onto the local machine's SSD. It doesn't have redundancy, but it's OK for now; I'll set up an rsync job to copy it to the NFS share in case the SSD fails. I just want to see if it'll be stable.

A few months pass, and it's been pretty stable, until I have a power outage. The UPS was there, but the configuration that tells the OS to shut down broke a while ago and I didn't notice. The files on ZFS are fine, but some on the local SSD got corrupted, including the Plex database, and I didn't notice. The rsync job just copied the corrupted file over the "backup" file.

It's late at night again, I just want to relax and watch something, and I discover this happened. I could try to figure out how to recover it, but it's probably easier to just do a clean scan. It's gonna take hours. Let's just start it and go to sleep.

Later: let's just migrate everything to Jellyfin. I have auto-upgrade set up because I'm smart. Jellyfin 10.8 updates and unfavorites all the favorited music tracks. "You have backups, right?" "Well, yes I do. Let me make sure I have an evening cleared so I can set up another instance of Jellyfin, restore the old backups, export the favorites list, and import it into the new one"... Oh, there's no way to do that? I guess I can export it to CSV; get a plugin to automate it for me; the plugin hasn't been updated for 10.8 but there is a pull request. OK, let's wait. Forget that I set up restic to delete backups older than 30 days. Fuck me. I have the CSV somewhere, I think. God, my `/tmp` is ephemeral and I hope I haven't rebooted since then. Phew, it's there. Fuck me still.

I have worked in managing services for most of my career. I know what I'm doing wrong. I need to set up monitoring, alerts, health checks, 3-2-1 backups (not just rsync to a ZFS pool), actual backup software that tracks file versions, off-site redundancy, dashboards for anomaly detection, scheduled hardware upgrades, and checks for memtest, disk health, and UPS configuration. I know how three or four nines are achieved in the industry.


Direct webhooks have been removed but you can still use webhooks to send messages to Teams using PowerAutomate.

It's messier to set up and maintain, but it works as intended, and you can also add more things to the workflow.

If you just want a URL to send json to, the new way is awful. But if you want to have more control, now you can.

Sometimes I like the PowerAutomate way, sometimes I hate it...
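On the "just a URL to send JSON to" point: the old direct webhooks were plain HTTP POSTs of a JSON body, and a Power Automate flow triggered by an incoming HTTP request can accept the same thing. A minimal sketch using only the Python standard library; the URL is a placeholder and the simple `{"text": ...}` payload shape is an assumption, since your flow may expect a different schema (e.g. an Adaptive Card):

```python
import json
import urllib.request

def build_payload(text: str) -> bytes:
    # Assumed minimal body; adapt to whatever schema your flow parses.
    return json.dumps({"text": text}).encode("utf-8")

def post_message(webhook_url: str, text: str) -> int:
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # typically 200/202 on success

# Hypothetical flow URL:
# post_message("https://prod-00.westeurope.logic.azure.com/workflows/...",
#              "Deploy finished")
```

The upside of the Power Automate route is exactly the "more control" part: the same trigger can fan out to extra steps beyond posting to Teams.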


Same

I don't understand how so few lines can produce so many different things.

And clicking on the background will just create a new random (?) background!

It seems that the aliases are doing a lot of work


The aliases were tripping me up! I almost understand it now. Not sure what the @lp is doing.


In French the adjective follows the noun, so AI is actually IA.

On AWS S3, there is a storage class called "Infrequent Access", shortened to IA everywhere.

A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, in my reports, I was talking about relying on S3 IA to reduce costs...


So we've reached a point where the quality of a piece of software is judged based on stars on GitHub.

The exact same thing happened with xClaw, where people were going "look at this app that got thousands of stars on GitHub in only a few days!".

How is that different from the follower/like counts on the usual social networks?

Given how much good it did to give power to strangers based on those counts, it's hard not to think that we're going in the completely wrong direction.

