Hacker News | mncharity's comments

Yesterday IMG tag history came up, prompting a memory-lane wander. It reminded me that in 1992-ish, pre the `www.foo` convention, I'd create DNS pairs, foo-www and foo-http: one for humans, and one to sling sexps.

I remember seeing the CGI (serve a URL from a script) proposal posted, and thinking it was so bad (e.g. the 256-ish character URL limit) that no one would use it, so I didn't need to worry about it. Oops. "Oh, here's a spec. Don't see another one. We'll implement the spec," says everyone. And "no one is serving long URLs, so our browser needn't support them." So no big query URLs during that flexible early period when practices were gelling. Regret.


sexps?

> sexps?

Not the person you're responding to, but I think they mean sexps as in S-expressions [1]. These are used in all kinds of programming, and they have been used inside protocols, as in the parenthesized-list syntax of the IMAP email protocol.

[1] https://en.wikipedia.org/wiki/S-expression
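For the curious, an S-expression is just parenthesized nested lists of atoms. A minimal, illustrative reader in Python (nothing to do with the 1992 setup above; purely a sketch of the notation, with a made-up IMAP-flavored example):

```python
# A minimal S-expression reader: turns "(a (b c) d)" into nested Python lists.

def parse_sexp(text):
    """Parse a single S-expression into nested lists of string atoms."""
    # Pad parens with spaces so split() tokenizes them.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            items = []
            pos += 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1  # skip the closing ")"
        return token, pos + 1      # an atom

    expr, _ = read(0)
    return expr

print(parse_sexp("(search (from alice) (subject hello))"))
# -> ['search', ['from', 'alice'], ['subject', 'hello']]
```

The appeal for wire protocols is that one tiny recursive reader like this handles arbitrarily nested structure.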



There's an old idea of adaptive media. Imagine a video drama that's composed of a graph of clips, like an old "choose your own adventure" book ("Do you X? If yes, goto page 45"). With gaze tracking, one can "hmm, the viewer is more focused on character A than B... so we'll give clips and subplots with more A".

Now, when reading, the eye moves in little jumps - saccades. They last tens of ms, the eye is blind during them, and with high-quality tracking, you know quite early just where that foveal peephole is going to land. So handwave a budget of a few ms for trajectory analysis, a few more for rendering latency at 200 Hz, and you still have 10-ish ms to play with. At 20k tok/s, that's 200 tok.
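The back-of-envelope above, written out; every number is a handwaved assumption from the paragraph, not a measurement:

```python
# Saccade-budget arithmetic: how many tokens fit in the "blind" window?

saccade_ms = 20          # a saccade lasts tens of ms; assume ~20 ms here
trajectory_ms = 5        # budget for predicting the landing point
render_ms = 5            # ~one frame of rendering latency at 200 Hz
tokens_per_s = 20_000    # assumed generation speed (20k tok/s)

spare_ms = saccade_ms - trajectory_ms - render_ms
tokens = tokens_per_s * spare_ms / 1000

print(f"{spare_ms} ms spare -> {tokens:.0f} tokens")  # 10 ms spare -> 200 tokens
```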

So perhaps one might JIT the next sentence, or the topic of the next paragraph, or the entire nature of the document, based on the user's attention. Imagine a universal document - you start reading, and you find the document is about, whatever you wanted it to be about?


Generative TikTok for words

> The right approach would have been to select a color appearance model (CIECAM02 is the standard), convert all our colors to this coordinate system, do the mixing in this coordinate system and then convert back to RGB. That being said, I did not want to deal with all the extra complexity that would have come along with this. Instead, I opted for a much simpler approach.

Python's nice `colour` package supports several color appearance models.[1]

[1] https://colour.readthedocs.io/en/master/colour.appearance.ht...
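There's also a middle ground between naive per-channel sRGB averaging and a full appearance model: undo the sRGB transfer curve, mix in linear light, and re-encode. A stdlib-only sketch (illustrative only - not the article's implementation, and not a substitute for CIECAM02):

```python
# Mix sRGB colors in linear-light space rather than in gamma-encoded space.

def srgb_to_linear(c):
    """Decode one sRGB channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel (0..1) back to sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def mix_linear(rgb_a, rgb_b, t=0.5):
    """Mix two sRGB colors (tuples of 0..1 channels) in linear light."""
    return tuple(
        linear_to_srgb((1 - t) * srgb_to_linear(a) + t * srgb_to_linear(b))
        for a, b in zip(rgb_a, rgb_b)
    )

# Pure red + pure green: linear mixing gives a brighter, less muddy midpoint
# than naive sRGB averaging (which would give 0.5 per channel).
print(mix_linear((1, 0, 0), (0, 1, 0)))  # roughly (0.735, 0.735, 0.0)
```

Linear mixing fixes the physics of adding light; an appearance model like CIECAM02 additionally accounts for perception (viewing conditions, adaptation), which is the extra complexity the quoted author opted to skip.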


But I'm glad for the ground-truthy approach taken. I'd suggest a pattern: interesting data is often unavailable because it doesn't align with incentives around science or commerce. Often it exists, just sitting on someone's disk, because they think no one is likely to care.

> In his later years, Tinney grew philosophical about the future of illustration as a profession, noting that stock image databases had changed the economics of the field. But he remained upbeat about the value of artistic talent, comparing it in that 2006 interview to the skill of public speaking: “It’s a nice talent to have, but it isn’t easy to find someone who’ll pay you just to do it. You need to combine that basic talent with another skill to really have a marketable service.”

Perhaps something to ponder as AI stirs up what constitutes a marketable service.


"To whom have I given blue from my sunbeam?!?" might be another fun question. I explored it as a potential interactive, to see, geographically, where your direct sunlight is donating sky blue-ification. Especially around sunset - IIRC, think of a 100 km neon tube at 7-ish km altitude, its near end 150 km up range, with a 15-ish km wide ground footprint receiving 3/4-ish of the ground-impinging light, and the remaining 1/4-ish spread over a 100 km wide path.

Perhaps "transparent with a blue tint"?

I’ll allow “transparent with an occasional blue tint under the current conditions of its nearest star”.

Does that confuse sales staff when shopping for clothes? ... :) The general observation being that educational descriptions of things sometimes get hedges which wouldn't usually be applied in everyday life. Yes, the nice red shirt will look black under some lighting, like some meters underwater... but it usually isn't mentioned. Yet, for example, colors of unfamiliar objects in educational content can get an "appears" hedge: "it appears white", rather than the more usual, simpler concept of "if its light looks white, it's 'white'".

As confusion elsewhere on this page illustrates, one also needs to clarify absorption. "It's just blue" sky and "it's just blue" stained glass have quite different behavior. Both side-scatter some blue, but while one mostly transmits the rest, the other mostly absorbs the rest, making for very different experiences peering through it.

So perhaps "clear with a blue tint"?
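The scattering-vs-absorption distinction above, as a toy three-band model. All the fractions are made up for illustration; the point is only that both media send some blue sideways to the viewer, but treat the *rest* of the beam very differently:

```python
# Toy model: split white light into a transmitted part and a side-scattered part.

def through_and_aside(light, scatter_blue, absorb_nonblue):
    """Split (r, g, b) light into (transmitted, side_scattered) tuples."""
    r, g, b = light
    side = (0.0, 0.0, b * scatter_blue)        # blue redirected toward a side viewer
    transmitted = (r * (1 - absorb_nonblue),   # sky passes red/green; glass eats them
                   g * (1 - absorb_nonblue),
                   b * (1 - scatter_blue))     # blue lost from the beam either way
    return transmitted, side

sun = (1.0, 1.0, 1.0)
sky_thru, sky_side = through_and_aside(sun, scatter_blue=0.3, absorb_nonblue=0.0)
glass_thru, glass_side = through_and_aside(sun, scatter_blue=0.3, absorb_nonblue=0.9)

print("sky transmits:  ", sky_thru)    # still nearly white -> sun looks normal through sky
print("glass transmits:", glass_thru)  # blue-dominated -> world looks blue through glass
```

Same blue side-scatter in both cases; only the transmitted remainder differs, which is why "it's just blue" underdescribes both.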


IIUC, saturation is a (not uncommon) distractor here, since you get the same observation when desaturated by a neutral filter, even on the "ground" with low air mass (Sun vertical, at altitude, etc).

Does anyone have experience of early-childhood "Why?"-phase meets speech-enabled LLMs?

Startup-wise, there's old work on conversational agents for toddlers, language acquisition, etc. But pre-literate developmental pedagogy - patient, adaptive, endlessly repetitive, responsive, fun... - seems a potential fit for LLMs, and not much explored? Explain It Like I'm 2-4. Hmm, there's a "Curio" Grok plushie for ages 3-12.


I saw a Microsoft talk decades back that was a dispirited "the people of India could be buying educational materials and... but no, all the money is in ringtones". For some kinds of business perspective, OK I guess. But for others, and for civilizational change, what's going on in the tail can matter a lot. Does China become a US engineering/science peer in the early 21st century absent an internet/WWW?

Well, the thing is that the educational materials are largely free. That's why the people of India don't need to buy them.

Isn't that a better world than one where the ringtones were free?


Ah, perhaps I should have said something like "educational materials, and apps, and other useful things" (disapproving judgement in the original).

> Well, the thing is that the educational materials are largely free.

A triumph and fruition of these last decades of massive effort. Now we just need to deal with their quality (with commercial as bad as free). AI may help, by reducing barriers to content creation - you might, for example, now more easily author an intro astronomy textbook that doesn't reinforce the top-30 common misconceptions, something the most-used (US; commercial) texts still don't manage.


I’m pretty curious. What are those top 30 misconceptions in US commercial astronomy texts? Is there a list somewhere? Or can you name some?

Sigh. One impact of AI will hopefully be more readily available systemic survey papers. [1] might or might not be a good place to start... but it's paywalled (by the National Science Teachers Association, no less), and I don't quickly see preprints/scihub/etc. Here's an old unordered list for browsing [2], and a more recent one [3]. Trumper did a series of papers asking the same few questions of various populations [4], to give a feel for numbers - like half not knowing the cause of day and night. Most lists cover subsets of astronomy, and most frequency info is on short lists. So... it's a mess. As are textbook reviews. Key phrases are "astronomy education research" and "misconceptions".

The one bit I explored was "what color is the Sun (the ball)?". Asking first-tier astronomy graduate students became a hobby, as most get it wrong (except... for those who had taken a graduate seminar covering common misconceptions in astronomy education). So I libgen'ed the 10-ish most-used intro astronomy textbooks in the US, according to some list. IIRC, it broke down roughly into thirds: correct (white); not explicit, but given the surrounding photos, or "yellow" (as classification without clarification), there's no way students won't be misled; and explicitly incorrect (yellow). Hmm, bulk evaluation of textbooks against some criteria is another thing multi-modal models could help with.

(A musing aside re AI for systemic reviews. Creating one is a structured process. They have been very manpower-intensive, so they aren't refreshed as often as desired, nor consistently available. And at least in medicine ("X should be done in condition Y"), there's potential for impact. I imagine AI close-reading of papers isn't quite there yet. But maybe a human-AI hybrid process?)

[1] https://www.per-central.org/items/detail.cfm?ID=14009
[2] https://web.archive.org/web/20070209033543/http://www.physic...
[3] appendix A of https://digitalcommons.library.umaine.edu/etd/2200/
[4] https://www.oranim.ac.il/sites/heb/SiteCollectionImages/pers...


s/systemic/systematic/g - oops.

> Systematic reviews are rigorous, transparent, and reproducible research studies that synthesize all existing evidence on a specific topic to answer a focused question and minimize bias. Unlike narrative reviews, they use predefined eligibility criteria, comprehensive searching, and critical appraisal to evaluate primary literature, often employing meta-analysis for quantitative results. [goog ai overview, edited]

