I really like OpenAI's mission and respect the people who work there, but the last time I read their blog, Reddit AMA, and other posts, it seemed that they don't have any concrete goals.
At what point can you say that has been achieved? A concrete goal to me would require a way of knowing it's been completed.
> For a research organization I'd question if you'd want something more specific?
Definitely. The goal at the moment sounds more like a funder's, though again I'd hope for something more solid (achieve human-level performance on problems X, Y, and Z, for example).
I'm not really sure that after the fact is the right time to work out what on earth your goal actually means.
To be more constructive, I'd recommend a short-term goal along the lines of:
Identify 5 areas that could see significant improvement to people's lives within 10 years with the concentrated effort of OpenAI.
This kind of intermediate goal (work out a plan) feels fairly sensible, and it should lead to research into how tech interacts with people (or doesn't yet) and which areas are open or underfunded. Maybe you'll find a big bottleneck where progress would help but isn't commercially viable to investigate.
Now with some identified areas to work in you can make more specific plans or goals for those. What's the first step?
Coincidentally, I've been reading Paul Christiano's medium posts on the AI control problem [1]. It's great to see AI Safety research folks join the OpenAI team.
It looks like the first guy's paper is partially based on an idea well-known in the competitive programming community (makes sense given the 3 IOI medals).
Disappointing that out of 8 hires there's only a single woman, and of course she was hired for a role that's less deep-researchy than the other new guys.
To test out increasingly 'general' and advanced A.I, you need a sandbox/playground environment...
You can use a real-world environment with a robot (noisy, higher latency, computationally restrictive, more headaches, and extra development time).
Or you can utilize a 'virtual' world.
Microsoft is using Minecraft as its sandbox/playground.
Video games are a good sandbox/playground: physics engine + simulation + AI hook.
You can generalize this into an observation->action loop platform.
There are many open source platforms out there for this.
Take Box2D, which OpenAI put a custom wrapper around, as an example. It's a 2D physics simulation engine; two dimensions means less complexity than 3D. You have your 'physical environment', and then you hook that into your AI's observation/command loop.
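That observation/command loop is simple to sketch. Here's a minimal toy version in Python; the environment and policy are made up for illustration (this is not Box2D or the actual Gym API, though Gym's reset/step interface looks very similar):

```python
class ToyEnv:
    """A stand-in for a physics engine: a 1-D world where the agent
    moves its position toward a goal."""
    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: -1 (move left) or +1 (move right)
        self.pos += action
        reward = -abs(self.goal - self.pos)  # closer to the goal = better
        done = self.pos == self.goal
        return self.pos, reward, done  # observation, reward, done flag


def run_episode(env, policy, max_steps=100):
    """The observation -> action loop: observe, act, repeat."""
    obs = env.reset()
    total_reward = 0
    for _ in range(max_steps):
        action = policy(obs)                 # the AI observes, then commands
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


# A trivial policy: always move right, toward the goal at +5.
total = run_episode(ToyEnv(), policy=lambda obs: 1)
print(total)  # -10: rewards -4 + -3 + -2 + -1 + 0
```

Swap `ToyEnv` for a wrapper around a real physics engine and `policy` for a learning agent, and you have the basic shape of every one of these platforms.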
There are public sandboxes/playgrounds and there are private ones. Many of these engines aren't hard to tie into. I'm sure this was one of the major things to resolve at OpenAI, as it is for anybody in the space: make a 'playground environment' for testing your AI...
I'm also sure that OpenAI has a more advanced private gym with higher-fidelity links to the AI, like everyone in the space does. Members only ;)
So OpenAI has packaged a bunch of open-source engines into an approachable platform to make development easier. It also hopes to get people to upload their results and detail how they achieved them... Interesting.
AI development is at the point, IMO, where you don't need a name or accolades to contribute. You don't need a PhD. You don't need to be an expert in the field. You don't need to be some award-winning coder. In some ways, those credentials can even harm you, by ingraining a view of how to approach problems in a space that is begging for new paradigms.
Dedicate a solid month and you can have a virtual AI gym set up and be off and running.
If you have done any serious code development, you can easily break into this space.
The most time-consuming part will be wrestling with these packages and dependencies, understanding them, and figuring out how to hook in and out of them.
It seems OpenAI has tried to reduce this pain with the release of OpenAI Gym.
However, you'll find that if you get into any serious AI dev, you're going to want to start cutting through people's wrappers and add-on layers, which add latency, increase complexity, and keep your AI away from the heart of the sim.
You'll want your own custom hooks....
You'll want the code to be as low level as possible.
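For illustration, "going low level" here just means owning the simulation step yourself rather than calling through someone's wrapper. A hypothetical sketch (a hand-rolled point-mass integrator, not anyone's actual engine code):

```python
def integrate(pos, vel, force, mass=1.0, dt=0.01):
    """One semi-implicit (symplectic) Euler step for a point mass --
    the 'heart of the sim' that wrappers normally hide from you."""
    acc = force / mass
    vel = vel + acc * dt   # update velocity first...
    pos = pos + vel * dt   # ...then position, using the new velocity
    return pos, vel


# The AI's hook is direct access to (pos, vel) every tick:
# no serialization layer, no wrapper latency, no added abstraction.
pos, vel = 0.0, 0.0
for _ in range(100):        # simulate 1 second at 100 Hz
    force = 1.0             # a (hypothetical) controller output each tick
    pos, vel = integrate(pos, vel, force)

print(round(pos, 4), round(vel, 4))  # 0.505 1.0
```

With the loop written this way, the AI can read or even rewrite the raw state between ticks, which is exactly the kind of fidelity a packaged gym tends to take away.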
It is. I tried it out, but it doesn't have the fidelity with respect to I/O and 'hooks' that I was looking for. Then again, I'm working beyond reinforcement learning. More advanced AI necessitates a more advanced gym.
I highlighted Box2D as it's an open-source physics/sim engine.
The sim/physics engine is 90% of the 'gym'. I'm sure that if there were money involved, the sims/hooks/etc. being sourced and packaged could get a lot more advanced much more quickly. For example:
https://github.com/erincatto/Box2D/issues
So far I've seen lots of press releases and credential waving, but no actual development plan or progress. Until they start publishing papers a la Google, I'm putting this in the vaporware basket.
Yep, I actually support this viewpoint. Judge us by our work, not by what we say about ourselves or whom we hire.
Fortunately, we'll have our first batch of papers coming out soon. (We also released OpenAI Gym a few weeks ago, but that's supporting infrastructure rather than novel research.) Hopefully we'll be able to earn our way out of the basket afterwards :).
True, but the people involved have much better things to do than join a vaporware research group. The people on this list alone (not to mention the heavily credentialed researchers from previous announcements) must have no problem finding meaningful work in this space, work that would produce results quite quickly. I'm inclined to believe there is something here and OpenAI is just ramping up.
The public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms, was released last month: https://openai.com/blog/openai-gym-beta/
This is kind of par for the course in non-profit AI research. OpenAI's spiritual parent MIRI also has something of a problem with lots of press releases and credential waving and few impactful papers or research results.
That sounds a little harsh, and I don't really mean it that way. Commercial AI/ML projects, since they have a clear goal they can measure against (play Go, drive a car, get better search results), tend to make meaningful progress that we can see, whereas the more philosophical, research-oriented non-profits don't have a yardstick like that, and probably won't until we have an inkling of how to actually make "strong AI". So this organization will probably seem like vaporware for some time, but hopefully it will prove its worth in the long run.
a) Not sure MIRI is a spiritual parent of OpenAI. It's a different approach to the problem, and I'm not sure MIRI would have spawned OpenAI even if they'd had the resources.
b) Also, that's unfair to MIRI. They are not only researching the area but also creating it. A lot of early work went into socializing AI risk as an idea, and in that sense pretty much every future effort in this area owes to them. I've given noticeable dollars to MIRI and I'm pretty satisfied with the ROI (of course there's always room for improvement); I'd do it again.
In my opinion it's impossible for them to do anything of substance - unless they turn into a lobbying body to drive legislative regulation of AI/AGI development.
Even with concrete plans and objectives, ML/DL/AI projects rarely live up to their hype. IMO, an AGI project that isn't focused on observable learning models (e.g., not on "consciousness", etc.) isn't going to make much progress.