I didn't realize that Newsboat has Miniflux support! That means you can read your feeds from the console and keep everything in sync with the web interface or any other client software. I will try this out later.
Recently I set up a miniflux instance with the intention of using it as a backend for newsboat. I'd been using newsboat on its own for a while and I liked it, but wanted my feeds accessible on other devices. Turns out I liked miniflux's UI so much that I didn't even bother finishing setting up newsboat to be its frontend. Now I'm 100% miniflux, using the web app on both desktop and mobile.
Newsboat has some excellent features that make it extensible, directly and indirectly.
It just uses a flat file for its RSS feeds, which means you can write your own scripts to set up categorisation or similar. I have one that takes the name of a YouTube channel and adds the somewhat hidden RSS feed.
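A script like that might look something like the sketch below. It assumes you already have the channel ID (the lookup from channel name to ID is omitted), uses YouTube's `videos.xml?channel_id=` feed URL, and appends a line in the urls-file format (URL plus a quoted tag); the urls-file path is an example.

```python
# Sketch: append a YouTube channel's (somewhat hidden) RSS feed to
# Newsboat's flat urls file. Resolving a channel name to its channel ID
# is left out; pass the ID directly.
from pathlib import Path

FEED_TEMPLATE = "https://www.youtube.com/feeds/videos.xml?channel_id={cid}"

def youtube_feed_url(channel_id: str) -> str:
    return FEED_TEMPLATE.format(cid=channel_id)

def add_feed(channel_id: str, label: str, urls_file: Path) -> str:
    """Append the feed with a quoted label tag, urls-file style."""
    line = f'{youtube_feed_url(channel_id)} "{label}"\n'
    with urls_file.open("a") as f:
        f.write(line)
    return line

# e.g. add_feed("UCxxxx", "some-channel", Path.home() / ".newsboat" / "urls")
```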
It allows filtering, so I only see the videos on topics I'm interested in - amazing for channels that put out loads of spam but the occasional topic/series I like. This puts it above other YouTube client apps.
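That filtering is Newsboat's `ignore-article` mechanism in the config file. A sketch of what a per-channel rule might look like (the feed URL and regex here are placeholders):

```
# Hide everything from this channel unless the title matches a series I follow
ignore-article "https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxx" "title !~ \"Series I Like\""
# "display" hides matching articles in the UI instead of discarding them
ignore-mode "display"
```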
And all the results are stored in a SQLite database, so if you need to migrate, sync, or query, it's very easy.
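For example, you can query the cache directly with Python's sqlite3 module. The table and column names below (`rss_item` with `feedurl` and `unread`) reflect the cache.db schema as I understand it; check your own database with `.schema` in the sqlite3 shell before relying on them.

```python
# Count unread articles per feed straight out of Newsboat's cache.db.
# Schema assumptions: table rss_item, columns feedurl (TEXT) and
# unread (0/1 INTEGER).
import sqlite3
from pathlib import Path

def unread_per_feed(db_path: Path) -> list[tuple[str, int]]:
    """Return (feed URL, unread count) pairs, busiest feed first."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT feedurl, COUNT(*) FROM rss_item "
            "WHERE unread = 1 GROUP BY feedurl ORDER BY COUNT(*) DESC"
        ).fetchall()
    finally:
        conn.close()

# e.g. unread_per_feed(Path.home() / ".newsboat" / "cache.db")
```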
The one thing I'd want is the ability to edit or update titles inline, so that I can format them or use scripts to pull in more info (e.g. displaying upvotes from Reddit or topic tags from BBC news articles). Currently it overwrites everything on update, which is fair but limiting.
Boy do I feel stupid now.
I have been using newsboat for almost five years now, but it never occurred to me that I could apply filtering rules for the exact use case you described.
It might be a fool's errand though if the spam is not obvious enough to catch with a regex.
I use newsboat exclusively for podcasts. It includes a downloader, "podboat", that does the file transfers. There's also the external "podbit", which builds on and/or replaces podboat as a full audio player. It starts being fun when the underlying file sync between devices is automatic.
Reading articles via RSS didn't stick for me, for reasons others have noted too (only part of the article content is inside the feed). With podcasts that's no concern.
> All who posted in this thread. If there is functionality that is not in Saved Search that you would like to see, please let me know. No promises that it can or will be built, but I'm willing to bring ideas back to the team. Please don't say "reinstate RSS", it will be a waste of all of our time.
One of the biggest issues with RSS these days is 'click here to read the rest of the article', only to be redirected to Ars Technica where a video starts auto-playing.
And this is SO wasteful. The amount of megabytes wasted on limited plans...
I linked newsboat with elinks. Actually, it's felinks, a fork of elinks which supports Python scripting. I also have a couple of small scripts and a bunch of rules for all major news sites that strip the HTML page to just the article. At some point I wanted to hook it up with the readability Python package, but it works pretty well already. Plus, this way all the links (in particular to images, which open with "feh" on demand) are preserved.
So on the feeds that are like you describe, I just hit "o" and can read the rest of the article without any noise 99% of the time right there in the terminal.
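The per-site stripping rules described above could be anything; a minimal stand-in for the idea, keeping only what's inside `<article>...</article>` using the standard library's HTML parser (real rules would be per-site selectors):

```python
# Toy version of a "strip the page to just the article" rule:
# collect only the text that appears inside <article> tags.
from html.parser import HTMLParser

class ArticleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting level inside <article>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "article" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:      # only keep text while inside an <article>
            self.chunks.append(data)

def article_text(html: str) -> str:
    p = ArticleExtractor()
    p.feed(html)
    return "".join(p.chunks).strip()
```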
Offpunk lets you subscribe to an RSS feed and read the full HTML page directly in the console, including browsing the website and having pictures displayed as "sixels" (works best with the "kitty" terminal). It also lets you read all of that offline and fetch all the new posts in a "tour".
This won't work for every site, but Ars has a solution for you: if you subscribe to one of their paid tiers, you get access to an ad-free, full-text RSS feed.
This kind of thing is why I like RSS as a delivery mechanism: while the feed itself is an open standard and can be consumed by any properly-configured reader, services can offer additional features and gate access server-side based on a per-user token.
You can mitigate this issue by using something like ftr.fivefilters.net or morss.it which will attempt to scrape the website for you and create a "full text" feed. Obviously not as good as the original feed being complete but it is often preferable to just the summaries.
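In practice this just means rewriting the feed URL before putting it in your reader. The endpoint shapes below are from memory and may have changed, so treat them as assumptions and check each service's own documentation:

```python
# Rewrite a summary-only feed URL to go through a full-text proxy.
# Both endpoint formats are assumptions; verify against the services' docs.
from urllib.parse import quote

def fivefilters_url(feed: str) -> str:
    # Full-Text RSS style: original feed URL passed as a query parameter
    return ("https://ftr.fivefilters.net/makefulltextfeed.php?url="
            + quote(feed, safe=""))

def morss_url(feed: str) -> str:
    # morss style: original feed URL appended to the service URL
    return "https://morss.it/" + feed

# e.g. put fivefilters_url("https://example.com/rss") in your urls file
# instead of the original feed.
```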
I understand why the entire article isn't necessarily in the RSS feed, but it sort of defeats the purpose of having an RSS reader in my mind.
Normally I quickly drop the feeds that only include the first paragraph. There is one local news site that's even worse: they host a series of blogs, and you only get the title and the first paragraph, but you're also only able to subscribe to ALL the blogs, not just the one you're actually interested in.
I’m of the opposite mind. For me the purpose of the feed is to get a look at the first paragraph so I can decide whether to press "o" and see the article (I discovered newsboat a few months ago and adopted it).
Agreed-ish. I prefer the abstract/summary rather than the first paragraph (though the first paragraph is informally often the same thing, nothing prevents abstracts/summaries from having multiple paragraphs).
It just feels like a lot of the internet simply isn't designed for RSS anymore. A lot of people forget it still exists, or think that it died out ages ago. I remember the YouTuber Tom Scott remarking in one of his videos that he misses RSS, not realising, of course, that it is still alive.
I use it on a tiny cloud instance and refresh the feeds at regular intervals with cron. This lets me tune out from the news cycle for days or even weeks at a time. When I have to appear like I've not been living under a rock, I can catch up fairly quickly.
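The cron part is just newsboat's non-interactive mode (`-x` runs commands without the UI). The interval and path here are illustrative:

```
# crontab entry: refresh all feeds every 2 hours on the cloud instance
0 */2 * * * /usr/bin/newsboat -x reload
```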
https://github.com/newsboat/newsboat/blob/master/CHANGELOG.m...