The following passage seems particularly noteworthy:
>"Sec. 3. AI Litigation Task Force. Within 30 days of the date of this order, the Attorney General shall establish an AI Litigation Task Force (Task Force) whose sole responsibility shall be to challenge State AI laws inconsistent with the policy set forth in section 2 of this order, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment, including, if appropriate, those laws identified pursuant to section 4 of this order. The Task Force shall consult from time to time with the Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the Assistant to the President and Counsel to the President regarding the emergence of specific State AI laws that warrant challenge."
It would seem logical to expect a number of AI-meets-law legal cases in the future, both in the U.S. and its states, and in the jurisdictions of foreign countries and their respective states/districts/regions...
I'm guessing (but not knowing) that the U.N. will have its own similar task force in the future -- as will other countries and their jurisdictional / law-making regions...
It will be interesting (at least from the perspective of a disinterested-in-outcome-but-interested-in-process legal observer) to see what cases (and also what laws/statutes) emerge in this area (Region Vs. Nation, Nation Vs. Region, Nation Vs. Nation, Region Vs. Region) in the future, and how they will be resolved...
(You know, for students of AI, students of Law, and students of The Future...)
>"A particular emphasis of our work on machine code verification is on using Coq as a place to do everything: modelling the machine, writing programs, assembling or compiling programs, and proving properties of programs. Coq’s powerful notation feature makes it possible to write assembly programs, and higher-level language programs, inside Coq itself with no need for external tools."
Looks very promising! There is definitely something here!
If moss survived outside the International Space Station for 9 months, then perhaps moss might be a good candidate to take to the surface of Mars on a future Mars mission...
>"Due to their sub-nanometer size, molecular components can efficiently exploit effects such as coherent resonant transport, Coulomb blockade, and strong inter-molecular capacitive coupling, phenomena essential for implementing nontrivial functions, including memories, memristors, switches, and oscillators.
Among such behaviors, negative differential conductance (NDC), where increasing voltage results in decreasing current, is of particular importance. NDC enables high-speed switching, low-power logic, and oscillatory circuits, and has become a fundamental mechanism for emerging nanoelectronics, molecular computing and quantum sensing [4, 5, 6, 7]. While NDC has been well established in conventional semiconductor components (e.g., Gunn diodes, Esaki diodes) [8, 9], quantum dot arrays [10, 11, 12], two-dimensional heterostructures [13, 14, 15], molecular layers [16, 17, 18] and even single-molecule break junctions [19, 20], scalable integration of NDC into reproducible molecular circuits has been hampered for decades by the so-called wiring problem."
Looks like some very interesting and potentially promising research!
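As a toy illustration of the NDC effect mentioned above (my own sketch, not from the paper), here's the standard approximate Esaki tunnel-diode tunneling-current relation I(V) = Ip · (V/Vp) · exp(1 − V/Vp), whose slope turns negative past the peak voltage Vp -- the parameter values are purely illustrative:

```python
import math

def esaki_current(v, i_peak=1.0e-3, v_peak=0.1):
    """Approximate tunnel-diode I-V curve (tunneling term only).

    i_peak (A) and v_peak (V) are illustrative values, not from the paper.
    """
    return i_peak * (v / v_peak) * math.exp(1.0 - v / v_peak)

# Past the peak voltage, increasing V decreases I: the NDC region.
i_at_peak = esaki_current(0.1)   # = i_peak at v = v_peak
i_beyond = esaki_current(0.2)    # smaller, despite the higher voltage
print(i_beyond < i_at_peak)      # True: negative differential conductance
```

The derivative dI/dV = (Ip/Vp) · exp(1 − V/Vp) · (1 − V/Vp) is negative exactly when V > Vp, which is the NDC region the quote is referring to.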
>"To assess the interpretability of our models, we isolate the small sparse circuits that our models use to perform each task using a novel pruning method. Since interpretable models should be easy to untangle, individual behaviors should be implemented by compact standalone circuits. Sparse circuits are defined as a set of nodes connected by edges."
...which could also be considered/viewed as Graphs...
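As a minimal sketch of that graph view (my own illustration, not code from the paper): once most weights are pruned to zero, a weight matrix reads directly as an edge list, with the surviving nonzero weights as edges:

```python
# Toy sparse "circuit": rows = source neurons, cols = destination neurons.
# Zero entries are pruned connections; nonzero entries are graph edges.
weights = [
    [0.0, 1.5, 0.0],
    [0.0, 0.0, -2.0],
    [0.7, 0.0, 0.0],
]

def to_edge_list(w):
    """Interpret a sparse weight matrix as a graph: (src, dst, weight) edges."""
    return [(i, j, w[i][j])
            for i, row in enumerate(w)
            for j, x in enumerate(row)
            if x != 0.0]

edges = to_edge_list(weights)
print(edges)  # [(0, 1, 1.5), (1, 2, -2.0), (2, 0, 0.7)]
```

With only a handful of edges per node, standard graph tooling (reachability, subgraph isolation, visualization) applies directly -- which is presumably part of why the sparse-circuit framing is attractive.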
(Then from earlier in the paper):
>"We train models to have more understandable circuits by constraining most of their weights to be zeros, so that each neuron only has a few connections. To recover fine-grained circuits underlying each of several hand-crafted tasks, we prune the models to isolate the part responsible for the task. These circuits often contain neurons and residual channels that correspond to natural concepts, with a small number of straightforwardly interpretable connections between them."
And (jumping around a bit more in the paper):
>"A major difficulty for interpreting transformers is that the activations and weights are not directly comprehensible; for example, neurons activate in unpredictable patterns that don’t correspond to human-understandable concepts. One hypothesized cause is superposition (Elhage et al., 2022b), the idea that dense models are an approximation to the computations of a much larger untangled sparse network."
A very interesting paper -- and a very interesting postulated relationship with superposition! (which could also be related to data compression... and, in turn, potentially to entropy as well...)
Most programming languages evolved from the limited selection of ASCII printable characters from the keys of early ASCII keyboards (which evolved from earlier typewriters and teletype machines)...
But these days we have Unicode -- a huge number of potential symbols (and combinations of symbols!) that could be used in new programming languages (e.g., the asterisk '*', long used for multiplication, could (finally!) be replaced with a true multiplication symbol, '×' -- if an extended symbol space were used by the language)...
Symbols could visually represent such things as loops, conditionals, variable declarations, etc., etc.
Pointers and pointer dereferencing (if the language supported it) -- could have their own symbols...
Would that be a good idea?
Well, if backwards ASCII compatibility is desired, to run on old computers for whatever reason (in a future time when we need to revert to older/simpler/more predictable systems because the AIs are out of control?), then maybe not...
But if we wanted graphical or quasi-graphical symbols to represent programming constructs more visually -- then maybe the idea has some merit...
Maybe a conversion program could be written that would convert between the new symbol set and ASCII, and maybe the programming language would be created to recognize both constructs...
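A minimal sketch of such a converter (the symbol choices are entirely hypothetical, just to show the shape of the idea) using a simple substitution table:

```python
# Hypothetical mapping between extended Unicode operators and ASCII forms.
UNICODE_TO_ASCII = {
    "×": "*",   # multiplication
    "÷": "/",   # division
    "≤": "<=",  # comparisons
    "≥": ">=",
    "≠": "!=",
    "←": "=",   # assignment
}

def to_ascii(source: str) -> str:
    """Convert extended-symbol source text to its ASCII-compatible form."""
    for sym, ascii_form in UNICODE_TO_ASCII.items():
        source = source.replace(sym, ascii_form)
    return source

print(to_ascii("area ← width × height"))  # area = width * height
```

A real implementation would need to skip string literals and comments (a tokenizer pass rather than blind substitution), and the reverse direction would need to disambiguate multi-character ASCII operators -- but the round-trip idea itself is straightforward.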
And yes, I know that there is the Scratch programming language -- but I'm not talking about quite that visual! :-) The language I envision would still be physically typed on a keyboard, as most are today...
"Can advanced beings evolve beyond the need for civilization?"
Future Star Trek episode:
The crew arrives at what appears to be a completely lifeless planet, and sees the empty buildings of a futuristic city...
Crew member: "These warlike people must have annihilated themselves... nuclear war with advanced energy devices that penetrate buildings with high energy radiation, that leave the buildings but destroy all life..."
Beings in light bodies (who suddenly appear out of nowhere!): "No, it's totally cool -- all of us are still alive! We simply evolved beyond the need for civilization!"
Dumbfounded crew member (whose sociological theory about civilizational collapse has now been proven utterly and completely wrong!): "Oh..."
>"During the years there have been many attempts to improve Python speed; generally they fall into two categories:
Implement "full Python". To be able to support all dynamic features and be fast, they usually employ a Just In Time (JIT) compiler.
Examples are PyPy, GraalPy, Pyston, and CPython's own JIT.
Implement a "subset of Python" or "variant of Python", either as an Ahead of Time (AOT) or JIT compiler which is able to produce fast code. The usual approach here is to remove many (if not all) of the dynamic features which make Python hard to compile.
Examples are RPython, Mypyc, Cython and Numba.
The problem of "full Python" JIT compilers is that sometimes they work very well and produce huge speedups, other times they don't produce any speedup at all, or might even introduce slowdowns, or they might use too much memory, or they are slow to "warm up".
The problem of the subset/variant approach is that by removing the dynamic features of Python, you end up with something which does not feel pythonic, and in which many typical and idiomatic Python patterns just don't work. You often end up with "Java with Python syntax" (nothing in particular against Java, but I hope it gives an idea of what I mean)."
Isn't that interesting!
You know what I'd love to see?
A Python compiler that works on a function-by-function basis...
That is, for each function, make the determination if a straight "clean" compilation of the function is possible -- that is, if it doesn't use special Python features that make compilation difficult/challenging/complex/impossible -- or have dependencies on other functions that do...
So if you have a Python function that adds simple integer variables, let's say, then that can be converted to C or Assembler or VM or LLVM or QBE code in a straightforward way, but if a function uses objects and lambdas and higher-level/complex constructs in a way that makes compilation challenging, complex or impossible, then flag those functions (write them out in a log, so the programmer could optionally simplify them, etc.) -- those will continue to be interpreted like they always are...
Yes, some or all of that infrastructure might already exist... I'm just thinking aloud... :-)
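A minimal sketch of that function-by-function triage step, using Python's `ast` module -- the set of "hard to compile" constructs here is just an illustrative placeholder, and a real compiler would also need the dependency analysis mentioned above:

```python
import ast

# Illustrative placeholder: constructs we treat as "hard to compile".
HARD_NODES = (ast.Lambda, ast.ClassDef, ast.Global, ast.Nonlocal,
              ast.GeneratorExp, ast.Yield, ast.YieldFrom)

def compilable_functions(source: str) -> dict:
    """Classify each top-level function as 'compile' or 'interpret'."""
    tree = ast.parse(source)
    verdicts = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            hard = any(isinstance(n, HARD_NODES) for n in ast.walk(node))
            verdicts[node.name] = "interpret" if hard else "compile"
    return verdicts

code = """
def add(a, b):
    return a + b

def make_adder(n):
    return lambda x: x + n
"""
print(compilable_functions(code))
# {'add': 'compile', 'make_adder': 'interpret'}
```

The "interpret" verdicts are exactly what would go into the log for the programmer to optionally simplify; the "compile" ones would be handed off to the C/LLVM/QBE backend.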
>"The first problem is that Python is extremely dynamic. I'm not talking only about dynamic typing, but also about the fact that in a given Python process, the "world" is a moving target and "everything can change at any time" "
(If we had a future language that supported both interpreted and compiled functions, then the interpreter should have the concept of "locked" functions and objects -- that is, once the compiler has compiled something, it's now "locked", and changes (if we want to keep it dynamic) to it mean invoking the compiler again post-change, at runtime... it would be messy, but there could be a hybrid dynamic interpreted yet JIT compiled language... but it would require runtime interaction between the interpreter and compiler... which could be messy, to say the least... and yes, some of that infrastructure might already exist...)
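A toy model of that "locked" idea (a hypothetical design of my own, with a plain cache standing in for real compilation): redefining a function discards its locked/compiled form, forcing recompilation on the next call:

```python
# Toy model of the hybrid "locked" idea: a registry that caches a
# "compiled" form of each function and invalidates it on redefinition.
class HybridRuntime:
    def __init__(self):
        self._compiled = {}  # name -> "compiled" (locked) callable
        self._source = {}    # name -> current definition

    def define(self, name, fn):
        """(Re)define a function; any locked/compiled form is discarded."""
        self._source[name] = fn
        self._compiled.pop(name, None)  # unlock: must recompile

    def call(self, name, *args):
        if name not in self._compiled:
            # Stand-in for real compilation: just adopt the current source.
            self._compiled[name] = self._source[name]  # now "locked"
        return self._compiled[name](*args)

rt = HybridRuntime()
rt.define("double", lambda x: x * 2)
print(rt.call("double", 21))              # 42
rt.define("double", lambda x: x + x + 1)  # redefinition unlocks it
print(rt.call("double", 21))              # 43
```

The messy part alluded to above is everything this toy omits: callers that inlined the old compiled code, monkey-patched attributes, and the cost of the interpreter/compiler handshake at runtime.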
>"Ever wonder what happens when you combine graphing algebraic curves with drawing in perspective? The result uncovers some beautiful relationships between seemingly different shapes, and all because of what happens when you include infinity through projective geometry."
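For context, the standard construction behind "including infinity" (a textbook fact, not quoted from the piece): a plane point gets homogeneous coordinates, and points at infinity are the ones whose last coordinate is zero:

```latex
(x, y) \;\mapsto\; [x : y : 1],
\qquad
[x : y : 0] \;=\; \text{a point at infinity}
```

For example, the parallel lines $y = mx + b_1$ and $y = mx + b_2$ homogenize to $mX - Y + b_i Z = 0$, and both pass through the same point at infinity $[1 : m : 0]$ -- which is exactly the "parallel lines meet at infinity" picture that makes the perspective-drawing connection work.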