PostgreSQL 9.2.4, 9.1.9, 9.0.13 and 8.4.17 released (postgresql.org)
252 points by edwinvlieg on April 4, 2013 | hide | past | favorite | 99 comments


The commit that fixes it with a few more details: http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitd...

    An oversight in commit e710b65c1c56ca7b91f662c63d37ff2e72862a94 allowed
    database names beginning with "-" to be treated as though they were secure
    command-line switches; and this switch processing occurs before client
    authentication, so that even an unprivileged remote attacker could exploit
    the bug, needing only connectivity to the postmaster's port.  Assorted
    exploits for this are possible, some requiring a valid database login,
    some not.  The worst known problem is that the "-r" switch can be invoked
    to redirect the process's stderr output, so that subsequent error messages
    will be appended to any file the server can write.  This can for example be
    used to corrupt the server's configuration files, so that it will fail when
    next restarted.  Complete destruction of database tables is also possible.

    Fix by keeping the database name extracted from a startup packet fully
    separate from command-line switches, as had already been done with the
    user name field.
    
    The Postgres project thanks Mitsumasa Kondo for discovering this bug,
    Kyotaro Horiguchi for drafting the fix, and Noah Misch for recognizing
    the full extent of the danger.
    
    Security: CVE-2013-1899
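To make the mechanism concrete, here is a toy model of the flaw (this is NOT PostgreSQL's actual code; function names and the simplified switch handling are illustrative). The vulnerable pattern feeds the database name from the startup packet through the same scanner as command-line switches, before any authentication runs; the fix keeps the dbname fully separate:

```python
def naive_startup(dbname):
    """Vulnerable pattern: the dbname is scanned like an argv switch."""
    if dbname.startswith("-r"):
        # In the real bug, -r redirected stderr, so subsequent error
        # messages were appended to any file the server could write.
        target = dbname[2:].strip()
        return ("redirect-stderr", target)
    return ("connect", dbname)

def fixed_startup(dbname):
    """Fixed pattern: the dbname is never treated as a switch."""
    return ("connect", dbname)

# A hostile client never needs the database to actually exist:
print(naive_startup("-r /etc/postgresql/postgresql.conf"))
print(fixed_startup("-r /etc/postgresql/postgresql.conf"))
```

The point of the fix is visible in the second call: the same hostile string is now just a (nonexistent) database name, which fails harmlessly at lookup time.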


That is a fantastic commit message. I try to convince everyone I work with that they should write commit messages like this. Slowly but surely, they come around.


And the author is a fantastic coder/developer, really!

https://en.wikipedia.org/wiki/Tom_Lane_%28computer_scientist...


What gets me about Tom Lane is that he's _so active_ on the mailing list(s).

It seems that every time I search the archives regarding an issue I'm having: I find a reply authored by Tom Lane.


Tom committed the fix, he didn't author it.


Who writes commit messages for these kinds of user-submitted patches? The author or the committer?


Generally the committer writes the commit message. There is occasionally discussion of specific wording ahead of time.


Usually the committer. Most patches sent to the mailing list do not contain a commit message.


I think the author of the commit message is what the OP was implying.


Yep, this is what I meant. I could have chosen my words in a better way though, since we were talking about a git commit. Having said that, quoting the commit message's second-to-last paragraph:

The Postgres project thanks Mitsumasa Kondo for discovering this bug, Kyotaro Horiguchi for drafting the fix, and Noah Misch for recognizing the full extent of the danger.

We can't exclude without further info that Tom contributed to the fix, just saying.

Knowing him after a few years spent following the Postgres dev process, I'd bet on co-authoring at least :)


I'm really happy that the PostgreSQL team was able to fix this so quickly and it does appear to be a massive security issue. However, on the flip side, in 13+ years of web development work, I've never really seen a database name beginning with "-".


I don't think you have to have a database starting with - for the bug to work.


No, but thankfully you do need postgres to be accessible remotely.


Which is not an uncommon situation actually. I've only started surveying the Internet for PostgreSQL for a bit more than a day and I've already discovered more than a hundred thousand (168,031) remotely-accessible PostgreSQL instances: http://www.shodanhq.com/search?q=port%3A5432


I'm surprised this is so common. I've never set up any database accessible to the public-- I've already got to worry about securing the public-facing web server, why add another vector for attack?


Even without being publicly accessible, it's a DBA's nightmare scenario. There are plenty of corporate data warehousing environments in which many hundreds of employees have direct access to the database. This exploit would allow any of those employees to drop tables without exposing their credentials.


In one case, a large service provider is specifically providing that kind of database access to their customers.

And to be fair: http://www.shodanhq.com/search?q=mysql


You get a lot more results if you search for the port directly! http://www.shodanhq.com/search?q=port%3A3306


Or you'll need such a weird design that user input is translated to database names.

I suspect that's why they weren't able to tell people that if your db port is secure you're safe.


Or have a Bad Guy in your network...


I firewall all traffic so not only is psql not open remotely (the users are tied to hosts), but the traffic never even makes it there unless you are coming from an authorized machine. It would take a really bad guy on the network to cause trouble here and at that point the database is not my biggest concern.


This bug wouldn't be such a big deal if such a name was a requirement for the exploit.


Can someone follow up on this? Do you need the db name to already begin with "-" to exploit this?

EDIT: No, the problem is in parsing, not the existing names: https://news.ycombinator.com/item?id=5492508


Thanks for clearing that up. I read the commit notes but it wasn't 100% clear and I was quite sure there's more to it considering this was all so hush-hush!


"Heroku was given access to updated source code which patched the vulnerability at the same time as other packagers. Because Heroku was especially vulnerable, the PostgreSQL Core Team worked with them both to secure their infrastructure and to use their deployment as a test-bed for the security patches, in order to verify that the security update did not break any application functionality. Heroku has a history both of working closely with community developers, and of testing experimental features in their PostgreSQL service."

I believe all the heroku hosted postgresql servers are externally accessible and there's no way to filter access by IP.

Of course hindsight is always 20:20, but perhaps it's a good idea for heroku to consider adding some basic (optional) firewall layer to allow customers to control who can connect to the hosted db?

Disclaimer: I'm not a heroku customer. I did however consider moving our pg's over to them a little while ago.


I would definitely pay Heroku a bit more to restrict access of a Heroku PG instance to a given EC2 security group.

I couldn't find a PG hosting provider with more useful features than Heroku - with that, it would be a killer option.


Well, I'll tell you why it is not implemented that way. I hope the restriction can be lifted some day. I am a member of the Heroku staff related to the matters at hand.

The problem is the sheer number of Heroku Runtime machines which are located in a smattering of IP space, and rapidly and accurately propagating the firewall rules required for tight network access control as applications churn around in there...even then, there have been some reports of voluminous firewall rules causing obscure problems. Of course: the world is an obscure place and yet we can deal with it in time. Such a thing could be hardened I'm sure, but the amount of bookkeeping required is a bit terrifying, and experience suggests that will not go entirely smoothly or be easy to find bugs in. At the time this was reasoned out (maybe about two years ago?), it wasn't even widely known that Heroku offered any data storage service of significance. Early days.

So, the simple approach is to enable access from the entire Heroku Runtime layer. But who can put applications there? Anybody on the Internet. That's why the 'ingress' feature to poke a temporary hole in the database firewall was dropped as too marginal given the fairly severe inconvenience of it all -- it had the feel of a weird Heroku-ism, especially in light of the lack of attacks using unauthenticated clients on Postgres, until this date. In addition, what about all the other addons? There would need to be an API, and because of the nature of what it is doing (poking holes before beginning, say, TLS negotiation which requires even more round trips and is slow enough as it is) it would need to be stable, fast, accurate, and all that other good stuff, lest all addons effectively be rendered offline all at once.

Other more application-level approaches are possible (like tunneling all connections to a local unix socket, or something), but that's a little strange, because it requires code injection of strange stuff into the running container, makes your URLs look funny, and so on. This model has been experimented with by some of my colleagues and staff of other, similar firms, and definitely has its attractive sides. Nevertheless, one of the general guidelines in our implementation choices is to not be too weird for someone moving on or off the service. These approaches are also not in and of themselves immune to DoS or security problems...they need careful auditing. Maybe another look is indicated. And again: what about every other addon? What about your local computer? Are you going to install some weird agent, open source or no, from every such addon?

My personal favorite pie in the sky option right now would be to cordon off a slice of contiguous, publicly addressable (but not publicly accessible) IP space so that firewall rules could remain compact and slower changing, and also involve your local computer: imagine being able to VPN to two or three such networks simultaneously, because their addressing does not alias at all. But this is still in the realm of fantasy, and probably would require looking at IPv6 to be able to segment the address space in a sane manner, which compounds another layer of stuff that can be buggy, even in such mundane details as address parsing.

So....there's some rambling, but I thought it perhaps useful to talk about some of the challenges here to motivate the discussion.


Thanks for the insights - much appreciated :-)


you can do the IP space thing with IPv6.


I think you petered out at the end of my wall of text ;)

But seriously, I'm glad that I was not totally off base in letting my mind drift to IPv6. Thanks.


It certainly seems feasible with the use of VPC and Security Groups.

At the very least I think they should offer an option that allows you to restrict access so only your dynos can reach the postgres port.


I'd love to use it with any EC2 installation (EngineYard, stock EC2 etc).

If only they could limit access to EC2 security groups, it would be amazing.


That would be great.

Security Groups work across accounts, so Heroku (or whoever) could let you provide your account ID and Security Group name, then authorise access from this group.


Simple EC2 security groups are not enough because we also run an arbitrary code-execution service (dynos).


yes, when I did my little research for hosted PG, heroku was pretty much the only viable option. That said, I did come across some difficulties running `rake spec` against the heroku hosted db (since you can't drop the database, only individual tables). This was giving me some (unrelated) headache.

Another thing I was really hoping for but couldn't find with heroku, was being able to do a point-in-time-restores via the heroku web/cli interface. This would be a seriously nice feature if something like that was available...


Product manager of Heroku Postgres here; if you specifically need this functionality around point in time restores you should reach out to us. Would love to hear more around the use cases behind it.


Thanks Craig. I might do that, but I think we're too small fish for any kind of a bespoke solution. Given that you guys came up with WAL-E, I was secretly hoping this was somehow baked-into some magical heroku interface already...


Make sure to reach out, magic may already exist.


I'm just glad the "I want magical stuff for free" kind of customer isn't restricted to any particular service or product.


I didn't say I want it for free. I'm just not a big-enough customer with deep enough pockets to have some customized solution built especially for me by heroku.

It doesn't mean that other people like me wouldn't be interested in something like this if it existed.


I would love to have Heroku add this web interface to the default PostgreSQL package: http://teampostgresql.herokuapp.com


More information about the security release can also be found in the special FAQ:

http://www.postgresql.org/support/security/faq/2013-04-04/


I'm a little confused about their release strategy. Perhaps someone can explain it to me.

They took their repositories private to secretly develop the bug fix. Then they released the fixed versions along with what seem to be enough details to trigger the bug for anyone who hasn't patched.

Sure the patch contains the same information in source form, but if they'd gone light on details while saying "seriously, go get this", there'd probably be fewer curious vandals trying to delete your database while you're reading HN.


I like to know exactly why I'm updating my database before I apply any patches. I doubt they could have been sufficiently light on the details, while still giving admins enough information to decide whether or not to upgrade.

"Apply this patch, don't worry what it does, just do it" is not something I want to hear from my database vendor :-)

Had the repos remained public, this detailed information would have been available to a lot more people, a lot sooner. Temporarily "going dark" to work on the patch seems like an acceptable compromise.


Not really, any big project has people going over every commit to see what changed. Any commits associated with a security release are particularly scrutinized. Within an hour of release there would already be people talking about the vulnerability, as well as example code for triggering it. Full disclosure is better, because even if people can't upgrade immediately, they can choose to block ports at firewalls, turn off databases, and take other mitigation steps right away.

Hiding the information just weakens the defender position, not the attacker position. Secrecy in implementation is not security, it is just stupidity.


If they were closed source, they could probably get away with it, buying hours to days of time before someone reverse-engineers the attack.

They are open source, though, and many people who use it build from source. It is very very easy for complete amateurs to look through the source and see what changed in a manner of minutes.


While this comment is wrong, it does not deserve the downvotes that it's gotten. The guy asked a reasonable question, now let's be polite and answer it (as this comment's sibling indeed does). Downvotes should be reserved for comments that undermine productive discussion.


Take a lesson from open source, security through obscurity does not work. Better to be fully transparent and honest about the flaws and their fixes, and get the word out there so that people update their boxes quickly.


"This update fixes a high-exposure security vulnerability in versions 9.0 and later. All users of the affected versions are strongly urged to apply the update immediately."


[deleted]


This is terrible advice. Installing another postgres binary from a different source is likely going to cause you headaches.

And it's not necessary - vendors such as Red Hat and Debian are fast with security updates. Given that Red Hat helped them apply for the CVE the non-version-9 fixes should be out PDQ. You can keep an eye on it here: https://access.redhat.com/security/cve/


It worked just fine but seeing as I was wrong I have deleted the post.


Debian has it. I assume Ubuntu does as well... at least for the 8.4 version.

---

PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit


We have two Ubuntu 11.10 servers (decommissioning soon!) that previously contained 9.1.3 and have just had this 9.1.9 version updated from the repos.


Ubuntu 12.04 already has the new package version of 9.1. Your mirror may be outdated (we use the Portuguese default).


It worked for me on Ubuntu - maybe it takes a bit of time to get around to all of the mirrors.



It's a testament to Canonical that Ubuntu 8.04 LTS still gets security patches backported to 8.3. If you (still) have servers running Hardy, it's 'apt-get upgrade' time: http://www.ubuntu.com/usn/usn-1789-1/


I'm not an expert so I'll ask here:

Is there an attack vector if you run PostgreSQL locally, no untrusted users are able to create connection strings and do not allow remote access?

It seems to be no but I prefer to be sure ;)


as far as I understand from the FAQ page, as long as connections to your PG database are blocked from external sources, you should be safe. Seems like a good idea to upgrade as soon as possible anyway though.

  > How can users protect themselves?
  > * Download the update release and update all of your servers as soon as possible.
  > * Ensure that PostgreSQL is not open to connections from untrusted networks.
  > * Audit your database users to be certain that all logins require proper credentials, and that the only logins which exist are legitimate and in current use.
EDIT: added a quote from the FAQ for clarity.


It doesn't specify whether pg_hba.conf is sufficient to protect against this, anybody have any word on that?


No, pg_hba.conf is not sufficient. I'll work on adding that to the FAQ.
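To illustrate with a hypothetical pg_hba.conf entry: even the most restrictive rule offers no protection here, because pg_hba.conf is consulted at authentication time, and the vulnerable switch processing runs before authentication ever begins:

```
# pg_hba.conf -- even rejecting every remote host:
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    reject
# ...does not defend against CVE-2013-1899, because the hostile
# startup packet is parsed before this file is ever consulted.
```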


I'm wondering this as well.


I would like an answer to this, quickly, also. I am scheduled to leave on a 6 hour hike in one hour - I have time to update if I have to. I only permit localhost connections.


Do it, it involves about 3 seconds of downtime per server to run `sudo aptitude update` and `sudo aptitude upgrade`. Other package management tools should be equally speedy. If you've rolled your own postgresql binary, get the new sources, re-build, stop services, replace binaries, start services (the old drill...)

The upgrade (on Debian based systems at least) is for libpq5, postgresql-9.1 and postgresql-client-9.1. You shouldn't need to do anything else unless, for some strange reason, you actually do have a database starting with "-".
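The Debian-flavored drill above might look roughly like this (a sketch, not a definitive runbook: the `9.1`/`main` cluster name is the Debian default and may differ on your install):

```shell
# Pull in the patched packages; the maintainer scripts normally
# restart the cluster for you.
sudo apt-get update
sudo apt-get install --only-upgrade libpq5 postgresql-9.1 postgresql-client-9.1

# If the cluster was not restarted automatically, do it by hand:
sudo pg_ctlcluster 9.1 main restart

# Confirm the running server now reports the patched minor version:
psql -c 'SELECT version();'
```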

Also, I wish you a fine hike!

(probably commented too late, but I hope you had a nice hike, nonetheless)


From the FAQ:

Who is most at risk:

"Any system that allows unrestricted access to the PostgreSQL network port, such as users running PostgreSQL on a public cloud, is especially vulnerable. Users whose servers are only accessible on protected internal networks, or who have effective firewalling or other network access restrictions, are less vulnerable."

So looks like it's low risk but they're not willing to say no risk.


The reason they are not willing to say no risk is presumably that if you don't upgrade, then any other security vulnerability that allows an attacker to trigger a network connection with a suitable payload to port 5432 (or any other ports you may have Postgres on) on your hosts could still be harmful.

That means anything that gives local shell as any user that run normal tools, but potentially also a lot of other things.

E.g. any software that can be tricked to try to connect to a local address/port pair and send a suitable string.

That dramatically escalates any minor little hole that might otherwise not be a risk for you.

(That's a reminder to always verify before trusting any hostname/IP a user passes you that it's not a local address or address you have privileged access to, and to also consider internally firewalling connections between your various hosts down to just what you need)
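That "verify before trusting" reminder can be sketched in a few lines (names and the exact policy here are illustrative, not a complete SSRF defense; a real check must also consider redirects and DNS rebinding):

```python
import ipaddress
import socket

def is_internal(host):
    """Return True if any address the name resolves to is non-public."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable: treat as unsafe
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_loopback or ip.is_private or ip.is_link_local or ip.is_reserved:
            return True
    return False

# Refuse user-supplied hosts that would let a request reach e.g.
# a Postgres port on your own network:
print(is_internal("127.0.0.1"))   # loopback
print(is_internal("10.0.0.5"))    # RFC 1918 private
```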


A firewall might be adequate; configuring postgres itself seems not to be. The vulnerable code is invoked before client authentication, so anyone who can make a tcp connection to the postmaster process can exploit the attack, even if their source IP would otherwise get them unconditionally bounced.


You don't need a firewall as long as you don't turn on remote connections; that's the listen_addresses option.


To be sure, you should upgrade. Why do you not want to upgrade?


[not gp post, but this is a normal situation]

Because some folks need to go through a long, drawn out process to upgrade internal software and might already have some place in the pipeline to put the upgrade so it will get tested with everything else.

If it's a security issue for your current installation, you go now and hope, but if it can wait for a release then you do that.


It seems to me that even large, overly bureaucratic corporations need a process in place that allows deploying critical security updates in a reasonably expedient manner. Is this uncommon?


Well, it's only a critical security update if it affects your installation. If there's a way to protect without a quick rush software upgrade, you may as well wait until the next scheduled software upgrade.


Exactly. We were prepared for a "quick rush" but when the details were released we did a combination of nmap to look for any accidentally exposed Postgres installs + adding explicit firewall rules to reject all traffic on the related ports (all of them should be rejected by default rules anyway, but we wanted to ensure none of the ports had been accidentally opened up) "just in case", and at that point we felt we could take a slower, more measured roll out of the upgrade. In 5 minutes we were "in the clear" and could breathe a sigh of relief.

It's still important to apply the updates, as I wrote elsewhere, because it's one more way someone can mess you up badly if they have managed to penetrate other parts of our systems. But upgrades, no matter how seemingly safe, can also cause problems (for example, I test-upgraded one VM that refused to come back up again afterwards - not Postgres' fault, but a config change someone had clearly not tested properly).

But in this case, knowing we have blocked the main attack vector means we can take the time to take a copy of each database VM we run, test apply the upgrade and verify everything works correctly before applying it live, instead of rushing it out.


Thanks all for the information.

The reason I wanted to not upgrade directly is that access was hard at the moment of posting. Upgrading should never wait too long, but this (luckily) wasn't defcon 1 for us.


It's more that bootvis has no immediate slack time to deal with issues if the upgrade isn't smooth.


Ubuntu repos already seem to have the fix


Sorta surprised I don't see people complaining that this release contained other changes besides the security fix.

Lots of folks complained about that (unintended) ActiveRecord change in Rails during the last security update.


So if I have no databases that start with "-", I'm not vulnerable? Didn't quite understand what they meant by that.


Just from the quote cited by octo_t I would read that you are still vulnerable: a malicious database user could craft a _connection string_ which contains a database name starting with -. There's no hint that the database has to exist on your server for this to work, so I would read that even a completely bogus request could still damage your files.


/* Is this all it takes? */

PQconnectdb("host=127.0.0.1 dbname=-exploit user=postgres password=postgres port=5432");


Yes, but that wouldn't do anything harmful. Something like dbname="-r /var/lib/postgresql/9.1/main/pg_clog/0000" would be required to cause any harm. I have not tested it in practice but that should cause the server to overwrite the file with log output.

EDIT: They are not overwritten but just appended to.


Nope. Looking at the release notes:

> Fix insecure parsing of server command-line switches (Mitsumasa Kondo, Kyotaro Horiguchi)

So I assume command-line switch parsing is somehow involved in parsing the connection string (probably because the same connection strings can be used from API and from CLI?), I guess a database name with a leading `-` can be interpreted as a switch and execute corrupting commands.

edit: according to the dedicated FAQ:

> The vulnerability allows users to use a command-line switch for a PostgreSQL connection intended for single-user recovery mode while PostgreSQL is running in normal, multiuser mode. This can be used to harm the server.


From the FAQ originally shared by edwinvlieg, you are still vulnerable:

The vulnerability allows users to use a command-line switch for a PostgreSQL connection intended for single-user recovery mode while PostgreSQL is running in normal, multiuser mode. This can be used to harm the server.


Should one update the local postgres version? Any write ups on how exactly to go about it?


Yes. http://www.postgresql.org/docs/9.2/static/upgrading.html , second paragraph:

PostgreSQL major versions are represented by the first two digit groups of the version number, e.g., 8.4. PostgreSQL minor versions are represented by the third group of version digits, e.g., 8.4.2 is the second minor release of 8.4. Minor releases never change the internal storage format and are always compatible with earlier and later minor releases of the same major version number, e.g., 8.4.2 is compatible with 8.4, 8.4.1 and 8.4.6. To update between compatible versions, you simply replace the executables while the server is down and restart the server. The data directory remains unchanged — minor upgrades are that simple.


This is the main vulnerability I presume

> A connection request containing a database name that begins with "-" may be crafted to damage or destroy files within a server's data directory

I just. No words.


Getting this stuff right is hard. Don't be a hater.

Just because the attack vector looks simple doesn't mean the bug was obvious.


Absolutely. Remember the MySQL authentication bypass vulnerability¹, where a blank password would succeed to authenticate 1/255th of the time? This reminds me of that.

1: http://thehackernews.com/2012/06/cve-2012-2122-serious-mysql...


I agree. Postgres is one of the most well thought out DBs I've ever used. They are slow to add features, but when they do, they are done right with lots of attention to detail. Everyone makes mistakes.


If you have never released software with a security hole, it is because you have never released software.

Very smart and conscientious people can mess things up here. Shaming serves no purpose.


The FAQ makes no mention of this massive data loss possibility which seems a bit odd…


The exploit does not actually destroy the files. It allows appending data to the files, making the server crash. The file could be recovered by simply removing the junk data added to it.

From the FAQ:

> Persistent Denial of Service: an unauthenticated attacker may use this vulnerability to cause PostgreSQL error messages to be appended to targeted files in the PostgreSQL data directory on the server. Files corrupted in this way may cause the database server to crash, and to refuse to restart. The database server can be fixed either by editing the files and removing the garbage text, or restoring from backup.
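A minimal sketch of why this is "persistent denial of service" rather than data loss: redirected stderr only *appends* to the target file, so the original bytes survive in front of the garbage (file name and the contrived byte strings here are made up for illustration):

```python
# Simulate the corruption: original data, then appended log junk.
original = b"real clog data\x00\x01"
junk = b"\nFATAL:  database \"-r ...\" does not exist\n"

with open("victim.dat", "wb") as f:
    f.write(original)
with open("victim.dat", "ab") as f:   # the exploit appends, never truncates
    f.write(junk)

# Recovery: strip the known garbage text off the tail.
with open("victim.dat", "rb") as f:
    data = f.read()
assert data.endswith(junk)
repaired = data[:-len(junk)]
assert repaired == original
```

In practice the repair is fiddlier than this (you have to recognize where the log text begins), which is why the FAQ also mentions restoring from backup.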


You actually expect most users to figure that out? Especially if corrupted with stuff that looks vaguely like regular postgres data...


Doesn't matter if they can figure it out or not. The data isn't lost, which is the salient point there. There is the potential for datafile corruption, but that is not the same as data corruption or data loss. All your bits are still there and can be recovered by someone with the right expertise.


I am not sure, but a couple of things would make it possible:

1) PostgreSQL does generally report which file was corrupted.

2) The PostgreSQL log output rarely looks similar to regular data, so it should be obvious what is wrong to anyone who looks at the contents of the file.

And most importantly they can always contact a PostgreSQL expert who could repair it.


defense in depth though, users shouldn't be able to craft connection requests to begin with


Then how would a user use the db? Not every use of a database is behind a web application.


Well, one could use something like spiped[1], which would add a very large roadblock to any attacker.

[1]: http://www.tarsnap.com/spiped.html
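For context, a spiped pairing for Postgres might look roughly like this (a sketch: the key path, port 8025, and hostname are made up; `-d` is the decrypting/server end, `-e` the encrypting/client end):

```shell
# On the database server: accept encrypted connections on 8025 and
# forward decrypted traffic to the local Postgres port.
spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:5432' -k /etc/spiped/pg.key

# On each client box: expose a local plaintext port that tunnels,
# encrypted, to the server; clients connect to localhost:5432 as usual.
spiped -e -s '[127.0.0.1]:5432' -t '[dbserver.example.com]:8025' -k /etc/spiped/pg.key
```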


This is perfectly compatible with Postgres.

However, this does not prevent any of your employees or other users of systems with access to spiped from committing this attack. You still need a client somewhere, and the server is still vulnerable.

Allowing remote connections from any IP to your database, like heroku apparently does, sounds kind of crazy to me. I can't believe they do it. But limiting and encrypting that access only limits, and does not eliminate, your vulnerability to this bug.

---

Just to be really clear: say your corporate blog stores its data in your main Postgres instance. As blogging engines tend to, it has a bug, and hackers succeed in using that to get access to your blog's server. Even if you are using spiped to connect the two boxes, they still have the ability to mess with your main database on some other, probably much better secured, box. This bug is ugly.



