Technically (or, at least, historically), they should have used the indefinite pronoun "one" i.e. "...because their defense systems seem overly sensitive to one's email address". But I imagine that would've got more comments than using you/your.
> Hacker news is there to promote ycombinator companies. So long as you know and avoid this it's surprisingly high quality. But that's there to lend more legitimacy to ycombinator.
Everything has a cost. For the web, that's typically monetary or your data and attention to advertisers. I think you're right that the cost of Hacker News is that my participation is lending some (tiny incremental) legitimacy to Y Combinator. It's also costing some tiny amount of my attention, in the sense that I may not have heard of Y Combinator if it weren't for Hacker News. For me personally, that is absolutely fine – but I'm glad you made it explicit so that it's a conscious choice.
[Edit: Of course it costs an absolutely vast amount of my attention :-) but I mean only a teeny tiny fraction of that is "payment" in the sense of noticing that Y Combinator exists.]
Ludicrously unnecessary nitpick for "Remove all the brown pieces of candy from the glass bowl":
> Gemini 2.5 Flash - 18 attempts - No matter what we tried, Gemini 2.5 Flash always seemed to just generate an entirely new assortment of candies rather than just removing the brown ones.
The way I read the prompt, it demands that the candies should change arrangement. You didn't say "change the brown candies to a different color", you said "remove them". You can infer from the few brown ones that you can see that there are even more underneath - surely if you removed them all (even just by magically disappearing them) then the others would tumble down into a new location? The level of the candies is lower than before you started, which is what you'd expect if you remove some. Maybe it's just coincidence, but maybe this really was its reasoning. (It did unnecessarily remove the red candy from the hand though.)
I don't think any of the "passes" did as well as this, including Gemini 3.0 Pro Image. Qwen-Image-Edit did at least literally remove one of the three visible brown candies, but just recolored the other two.
That is a great point! Since we are moving towards better "world models" in terms of these multimodal models, you could reasonably argue that if the directive was to physically remove the candy, then in the process of doing so gravity/physics could affect the positioning of other objects.
You will note that the Minimum Passing Criteria allows for a color change in order to pass the prompt, but with the rapid improvements in generative models, I may revise this test to be stricter, only allowing "Removal" to be considered a pass as opposed to a simple color swap.
One of the really nice things Julia does is make broadcasting explicit. The way you would write this in Julia is
Z = [1,2,3]
W = Z .+ Z' # note the . before the + that makes this a broadcasted addition
This has 2 big advantages. Firstly, it means that users get errors when the shapes of things aren't what they expected. A DimensionMismatch error is a lot easier to debug than a silently wrong result. Secondly, it means that Julia can use `exp(M)` etc. to be a matrix exponential, while the element-wise exponential is `exp.(M)`. This allows a lot of code to naturally work generically over both arrays and scalars (e.g. exp of a complex number will work correctly if written as a 2x2 matrix).
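For contrast, here's a quick numpy sketch of the same outer sum. numpy applies broadcasting implicitly, so the shape expansion happens with no visible marker in the expression - which is exactly the "silently wrong result" risk being described:

```python
import numpy as np

Z = np.array([1, 2, 3])

# numpy broadcasts implicitly: a (3,) array plus a (3, 1) column
# silently expands to a (3, 3) matrix - no error, and no dot to
# signal that element-wise expansion is happening.
W = Z + Z[:, None]

print(W.shape)  # (3, 3)
```

If the shapes had been a genuine mistake rather than intentional, numpy would happily return the (3, 3) result, whereas Julia's un-dotted `+` would refuse.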
* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.
Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset; as soon as they get a 1d vector they feel the need to artificially make it into a 2d column. (BTW (Y @ X)[:,np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1) but I acknowledge it's not exactly compact.)
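To make that equivalence concrete, here's a small sketch with hypothetical shapes for Y and X (the article's actual arrays aren't reproduced here):

```python
import numpy as np

# hypothetical shapes, just for illustration
Y = np.arange(6).reshape(2, 3)   # (2, 3)
X = np.array([1.0, 2.0, 3.0])    # (3,)

a = Y @ X.reshape(3, 1)          # the article's version: force X into a column first
b = (Y @ X)[:, np.newaxis]       # multiply 1D, then add the axis at the end

print(a.shape, np.array_equal(a, b))  # (2, 1) True
```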
They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem like it's very natural to want to choose row or column vectors. In my experience, writing matrix maths in numpy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in numpy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.
* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.
I despise Matlab, but I don't think this is a valid criticism at all. It simply isn't possible to do serious math with vectors that are ambiguously column vs. row, and this is in fact a constant annoyance with NumPy that one has to solve by checking the docs and/or running test lines on a REPL or in a debugger. The fact that you have developed arcane invocations of "[:,np.newaxis]" and regular .reshape calls is, I think, a clear indication that the NumPy approach is basically bad in this domain.
You do actually need to make a decision on how to handle 0 or 1-dimensional vectors, and I do not think that NumPy (or PyTorch, or TensorFlow, or any Python lib I've encountered) is particularly consistent about this, unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana, followed by subsequent .reshape calls to avoid these issues. As much as I hated Matlab, this shaping issue was not one I ran into as immediately as I did with NumPy and Python Tensor libs.
EDIT: This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why. And, frankly, if you have gone through proper math texts, they are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly. It's not that you can't figure it out from context, it is that having to figure it out and check seriously damages fluent reading and wastes a huge amount of time and mental resources, and terrible shaping documentation and consistency is a major sore point for almost all popular Python tensor and array libraries.
> It simply isn't possible to do serious math with vectors that are ambiguously column vs. row ... if you have gone through proper math texts
(There is unhelpful subtext here that I can't possibly have done serious math, but putting that aside...) On the contrary, most actual linear algebra is easier when you have real 1D arrays. Compare an inner product form in Matlab:
x' * A * y
vs numpy:
x @ A @ y
OK, that saving of one character isn't life changing, but the point is that you don't need to form row and column vectors first (x[None,:] @ A @ y[:,None] - which BTW would give you a 1x1 matrix rather than the 0D scalar you actually want). You can just shed that extra layer of complexity from your mind (and your formulae). It's actually Matlab where you have to worry more - what if x and y were passed in as row vectors? They probably won't be but it's a non-issue in numpy.
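A minimal sketch of the shape difference being described (identity matrix chosen purely for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
A = np.eye(3)
y = np.array([4.0, 5.0, 6.0])

s = x @ A @ y                    # plain scalar: 1*4 + 2*5 + 3*6
m = x[None, :] @ A @ y[:, None]  # explicit row/column: a (1, 1) matrix

print(s)        # 32.0
print(m.shape)  # (1, 1)
```

To use `m` as a number you'd still have to unwrap it (e.g. `m[0, 0]`), which is the extra layer of complexity being referred to.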
> math texts ... are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly.
That's because they use the blunt tool of matrix multiplication for composing their tensors. If they had an equivalent of the @ operator then there would be no need, as in the above formula. (It does mean that, conversely, numpy needs a special notation for the outer product, whereas if you only ever use matrix multiplication and column vectors then you can do x * y', but I don't think that's a big deal.)
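For what it's worth, here's what that special notation looks like in numpy - either the named `np.outer`, or broadcasting with explicit column/row shapes:

```python
import numpy as np

x = np.array([1, 2])
y = np.array([3, 4])

# numpy's named outer product (the Matlab x * y' equivalent):
O1 = np.outer(x, y)
# ...or the same thing via broadcasting:
O2 = x[:, None] * y[None, :]

print(O1)  # [[3 4]
           #  [6 8]]
```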
> This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why.
I don't often use scikit-learn but I tried to look for 1D/2D agreement issues in the source as you suggested. I found a couple, and maybe they weren't representative, but they were for functions that could operate on a single 1D vector or could be passed as a 2D numpy array but, philosophically, with a meaning more like "list of vectors to operate on in parallel" rather than an actual matrix. So if you only care about 1d arrays then you can just pass it in (there's a np.newaxis in the implementation, but you as the user don't need to care). If you do want to take advantage of passing multiple vectors then, yes, you would need to care about whether those are treated column-wise or row-wise but that's no different from having to check the same thing in Matlab.
Notably, this fuss is precisely not because you're doing "real linear algebra" - again, those formulae are (usually) easiest with real 1D arrays. It's when you want to do software-ish things, like vectorise operations as part of a library function, that you might start to worry about axes.
> unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana
You shouldn't have to call .ravel or .flatten if you want a 1D array - you should already have one! Unless you needlessly went to the extra effort of turning it into a 2D row/column vector. (Or unless you want to flatten an actual multidimensional array to 1D, which does happen; but that's the same as doing A(:) in Matlab.)
Writing foo[:, None] vs foo[None, :] is no different from deciding whether to make a column or row vector (respectively) in MATLAB. I will admit it's a bit harder to remember - I can never remember which index is which (but I also couldn't remember without checking back when I used Matlab either). But the numpy notation is just a special case of a more general and flexible indexing system (e.g. it works for higher dimensions too). Plus, as I've said, you should rarely need it in practice.
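For the record, the which-index-is-which rule, with a throwaway array:

```python
import numpy as np

foo = np.array([1, 2, 3])   # true 1D array, shape (3,)

col = foo[:, None]   # None in the *second* slot -> column, shape (3, 1)
row = foo[None, :]   # None in the *first* slot  -> row,    shape (1, 3)

print(col.shape, row.shape)  # (3, 1) (1, 3)
```

(`None` here is just `np.newaxis`; it inserts a length-1 axis at that position, which is why the same notation generalises to higher dimensions.)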
x @ A @ y is supposed to be a dot product and you are saying this is better notation?? Row and column vectors have actual meaning. Sorry but I am not reading the rest of whatever you wrote after that. Not the GP but you should just consider the unhelpful subtext to be true.
Good advice but there's a bit of a difference between a device (or even several) you can knock together yourself and throw out of the side of a (surface) boat vs access to a whole undersea cable which (I have just learned) is what you need for DAS. Plus, if you can do it yourself with virtually no resources, it's a safe bet that any potential adversaries are already doing something many orders of magnitude greater.
Supposedly new submarines are so quiet that they can't be detected anyway. I'm sure there's a large element of exaggerating abilities here, but there's definitely an element of truth: in 2009, two submarines carrying nuclear weapons (not just nuclear powered) collided, presumably because they couldn't detect each other. If a nuclear submarine cannot detect another nuclear submarine right next to it then it's unlikely your $5 hydrophone will detect one at a distance.
Of course, none of this means that the military will be rational enough not to be annoyed with you.
True, but the original comment that we're talking about here (by sundarurfriend) just mentioned an LLM's output in passing as part of their (presumably) human-written comment. Nothing you've linked to prohibits that.
That is still true and still irrelevant here. The comment we're talking about was not written by a bot with a disclaimer at the start. They just asked about its output. They didn't even quote its output - they paraphrased it and added their own commentary!
I know HN rules prohibit saying "did you even read it?" but you surely can't have read the comment to have come to this view, or at least significantly misread it. Have another look.
Most of all, HN guidelines are about encouraging thoughtful discussion. sundarurfriend's comment asked a genuinely interesting question and inspired interesting discussion. This subthread of "but AI!" did not.
Except in that case they were summarizing it, which I read as closer to “I found this on Stack Overflow but don’t know if it’s right”. I think that’s less offensive than having the post be LLM output or, especially, pretending to be authoritative.
> There’s a huge difference between functions that might mutate a dictionary you pass in to them and functions that definitely won’t.
Maybe I misunderstood, but it sounds to me like you're hoping for the following code to work:
def will_not_modify_arg(x: frozendict) -> Result:
    ...
foo = {"a": 1, "b": 2} # type of foo is dict
r = will_not_modify_arg(foo)
But this won't work (as in, type checkers will complain) because dict is not derived from frozendict (or vice-versa). You'd have to create a copy of the dict to pass it to the function. (Aside from presumably not being what you intended, you can already do that with regular dictionaries to guarantee the original won't change.)
Ah, I see. The last sentence in your previous comment makes more sense now ("Mapping is great, but ... you can violate it at run time"). A type checker would normally catch violations but I can still see a frozendict would be useful.
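For anyone reading along: the standard library's types.MappingProxyType is one runtime-enforced option, though unlike a true frozendict it's a read-only *view* of a still-mutable dict rather than an immutable copy:

```python
from types import MappingProxyType

foo = {"a": 1, "b": 2}
view = MappingProxyType(foo)   # read-only view, enforced at run time

try:
    view["a"] = 99             # any mutation through the view raises
except TypeError as e:
    print("blocked:", e)

foo["a"] = 99                  # ...but the underlying dict can still change
print(view["a"])               # 99 - the view tracks the original
```

So it closes the run-time hole for callees that only receive the proxy, but doesn't give frozendict's guarantee that the mapping itself never changes.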