Anim AI

This project uses physical and cell simulations to evolve neural architectures for embodied agents.

The best deep-learning image classification models are prone to adversarial pixel attacks. Even the latest transformer models 'hallucinate' and fail at basic reasoning, demonstrating that they are still stochastic parrots.


The human brain contains dozens of specialized, interlocked circuits that have been tuned and extended over vast evolutionary time frames. While we've made significant progress creating biologically inspired models that perform well on specialized tasks, extending them into more complete systems remains out of reach.

This project's theory is that the iterative approach to AGI (understanding and abstracting systems layer by layer) will be slower than simulated natural evolution, provided we can construct the right solution space.
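
To make the approach concrete, the sketch below shows the kind of outer evolutionary loop this implies: a population of genomes is scored by a fitness function (standing in for a full simulation run of an embodied agent), and the fittest are kept and mutated each generation. This is a minimal illustration under assumed names (Genome, evaluate_fitness, mutate), not code from this project.

```cpp
// Minimal sketch of an outer evolutionary loop. All names are hypothetical;
// evaluate_fitness() stands in for running an embodied agent in simulation.
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

struct Genome {
    std::vector<float> weights;  // stand-in for an evolved neural architecture
    float fitness = 0.0f;
};

// Placeholder objective: in the real system this would be a simulation run.
float evaluate_fitness(const Genome& g) {
    float score = 0.0f;
    for (float w : g.weights) score -= w * w;  // toy: prefer small weights
    return score;
}

// Gaussian mutation of a copy of the parent.
Genome mutate(Genome parent, std::mt19937& rng) {
    std::normal_distribution<float> noise(0.0f, 0.1f);
    for (float& w : parent.weights) w += noise(rng);
    return parent;
}

int main() {
    std::mt19937 rng{42};
    std::normal_distribution<float> init(0.0f, 1.0f);

    // Initialize a random population.
    std::vector<Genome> population(64);
    for (auto& g : population) {
        g.weights.resize(16);
        for (float& w : g.weights) w = init(rng);
    }

    for (int generation = 0; generation < 100; ++generation) {
        for (auto& g : population) g.fitness = evaluate_fitness(g);

        // Truncation selection: keep the top quarter, refill by mutating elites.
        std::sort(population.begin(), population.end(),
                  [](const Genome& a, const Genome& b) { return a.fitness > b.fitness; });
        const std::size_t elite = population.size() / 4;
        for (std::size_t i = elite; i < population.size(); ++i)
            population[i] = mutate(population[i % elite], rng);
    }
    return 0;
}
```

Truncation selection and Gaussian mutation appear here only because they are the simplest operators to write down; the genome encoding, search strategy, and the simulation itself are the real open questions.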


Will this work? Solution spaces complex enough for AGI to evolve are enormous, with many dimensions and presumably many local minima, so making any real progress is expected to require unreasonably large amounts of compute.


It's possible to get lucky with the right simulation, biases, and initial conditions, and compute will continue to get cheaper and faster over time. There's an exciting chance the project could produce useful partial results, or at least be a valuable experience to learn from and write about.

This is currently a closed-source work-in-progress.

Technology

  • Built with Vulkan, C++, and Dear ImGui
  • Some prototyping and planned work uses NVIDIA FleX and CUDA
  • A previous iteration was built in Rust, but I rewrote it to make it easier to use C/C++ libraries