Johnny Autoseed Research

Open to grounded research collaborations

Johnny Autoseed is an open research concept documenting what small-scale automation can actually do in real spaces. If you have land, facilities, or community projects where a lightweight trial or documentation sprint could add value, we'd be glad to compare notes.

Best fit: partners who already have a site, a concrete question, and tolerance for experimentation.

From sketch to harvest data

How we approach research collaborations

We bring curiosity, documentation rigor, and a willingness to test ideas in real-world constraints. Our approach is collaborative: partners define the problems, we help design experiments and gather data.

What we're exploring

Areas of interest for research partnerships

We're interested in testing and documenting these areas with partners who have suitable sites or existing projects.

Space planning

Modeling layouts for raised beds, containers, and micro-farms to optimize footprint and access.

Lighting strategies

Testing low-cost LED setups and light schedules for different crops and indoor environments.
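One way to compare light schedules across setups is Daily Light Integral (DLI): the total photosynthetic photons delivered per day. A minimal sketch, assuming you can measure or look up a fixture's PPFD at canopy height:

```python
def daily_light_integral(ppfd_umol_m2_s: float, hours_on: float) -> float:
    """Convert PPFD (µmol/m²/s) and photoperiod (hours) to DLI (mol/m²/day)."""
    return ppfd_umol_m2_s * hours_on * 3600 / 1_000_000

# Example: a budget LED panel delivering 200 µmol/m²/s for 16 h/day
dli = daily_light_integral(200, 16)   # 11.52 mol/m²/day
```

Commonly cited DLI targets for leafy greens sit roughly in the low-to-mid teens, so a panel and photoperiod like the example above is in a plausible range; treat the target as crop-specific.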

Irrigation & nutrition

Comparing simple watering systems, fertigation, and soil mixes under real-world conditions.

Data collection

Low-cost sensors and documentation methods to track crop performance and environmental factors.
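A minimal sketch of the documentation side: append timestamped readings to a CSV anyone can open in a spreadsheet. The sensor name and value in the usage comment are placeholders for whatever low-cost probe is on hand (soil moisture, temperature, lux).

```python
import csv
import time
from pathlib import Path

def log_reading(path: Path, sensor_id: str, value: float) -> None:
    """Append one timestamped sensor reading; write a header on first use."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "sensor_id", "value"])
        writer.writerow([time.strftime("%Y-%m-%dT%H:%M:%S"), sensor_id, value])

# Hypothetical usage: poll a soil-moisture probe once per log call
# log_reading(Path("trial_data.csv"), "soil_moisture_bed1", 0.41)
```

Plain append-only CSV is deliberate: it survives power cuts mid-trial, needs no database, and stays readable to partners without programming experience.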

Automation potential

Identifying tasks where simple robotics or automation could reduce labor in small-scale settings.

ALOHA-family hardware

The ALOHA 2 stack is the open, well-documented bimanual rig behind much of the recent ALOHA research line; policy tooling such as LeRobot often targets the same imitation-learning workflows. We use it as context when asking what teleop data would cost for harvest-adjacent tasks.

Open documentation

Creating shareable reports, datasets, and practical guides from trial results.

Landscape · Manipulation

Google DeepMind: advances in robot dexterity

In a robotics team post, Google DeepMind highlighted two systems aimed at contact-rich, dexterous behavior: ALOHA Unleashed (bimanual imitation learning) and DemoStart (simulation-first curriculum learning for multi-fingered hands). The summary below is adapted from that announcement; follow the links for the full papers and project pages.

Read the original DeepMind article →

ALOHA Unleashed

Bimanual manipulation from human demonstrations, pushing past single-arm pick-and-place toward tasks that need two coordinated arms and delicate contact.

  • Reported real-world skills include tying shoelaces, hanging a shirt, repairing another robot, inserting a gear, and kitchen tidying — long-horizon tasks with deformable objects and tight clearances.
  • Builds on the ALOHA 2 platform (itself extending Stanford’s original low-cost, open ALOHA teleoperation stack) with hardware tuned for ergonomics and data collection.
  • Pipeline: remote teleoperation to gather demonstrations of hard tasks, then a diffusion-style policy that denoises actions from random noise (analogous in spirit to image diffusion) so the robot can replay those behaviors autonomously.
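The denoising idea in that pipeline can be illustrated with a toy sketch. This is not the paper's actual network or noise schedule; `denoiser` is a stand-in for the learned model, and the schedule is a simple fixed fraction per step.

```python
import numpy as np

def sample_action(denoiser, dim: int, n_steps: int = 50, seed: int = 0):
    """Toy diffusion-style sampler: start from Gaussian noise and apply
    repeated small corrections predicted by a (learned) denoiser."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(dim)        # pure noise
    for _ in range(n_steps):
        a = a - denoiser(a) / n_steps   # remove a fraction of predicted noise
    return a

# Stand-in "denoiser" that pretends the clean action is all zeros,
# so each refinement step moves the sample toward the origin.
toy_denoiser = lambda a: a
action = sample_action(toy_denoiser, dim=4)
```

The point of the sketch is only the shape of the computation: sampling starts from noise and is refined iteratively, rather than predicted in one shot.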

DemoStart

Reinforcement learning in simulation with a demonstration-led curriculum, targeting dexterous multi-finger hands where every extra joint makes control harder.

  • Starts from “easy” states and progressively trains on harder ones until the task is mastered — an auto-curriculum that reportedly needs on the order of 100× fewer simulated demonstrations than typical real-world demonstration budgets for comparable goals.
  • In simulation, the team reported >98% success on tasks such as cube reorientation, tightening a nut and bolt, and tidying tools; transferred policies reached ~97% on cube reorientation/lifting in hardware and ~64% on a precision plug–socket insertion.
  • Built with MuJoCo, using domain randomization and other sim-to-real techniques; evaluated on the three-finger DEX-EE hand developed with Shadow Robot.
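The auto-curriculum idea can be caricatured in a few lines (a toy sketch, not DemoStart itself): episodes start from states taken along a recorded demonstration, and the start point moves earlier, making the task longer and harder, once the policy succeeds reliably.

```python
class DemoCurriculum:
    """Toy demo-led curriculum: begin episodes near the end of a recorded
    demonstration and back the start index up as success rate improves."""

    def __init__(self, demo_states, window: int = 20, threshold: float = 0.8):
        self.demo_states = demo_states       # states along one demonstration
        self.start = len(demo_states) - 1    # easiest: start just before the goal
        self.window, self.threshold = window, threshold
        self.results = []

    def reset_state(self):
        """State to reset the simulator to for the next episode."""
        return self.demo_states[self.start]

    def report(self, success: bool) -> None:
        """Record an episode outcome; harden the task when it is mastered."""
        self.results.append(success)
        recent = self.results[-self.window:]
        if len(recent) == self.window and sum(recent) / self.window >= self.threshold:
            self.start = max(0, self.start - 1)   # start earlier = harder
            self.results.clear()

# Hypothetical usage with a 100-step demonstration:
cur = DemoCurriculum(demo_states=list(range(100)))
for _ in range(20):
    cur.report(True)      # policy keeps succeeding from the easy start
```

After twenty straight successes the start index steps back by one; a real system would also randomize start states and anneal much more aggressively.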

For Johnny Autoseed, this line of work matters because harvest, wash-up, and kitchen-adjacent tasks are exactly where cheap arms fail first: contact, clutter, and coordination between two manipulators. It does not change our DIY-first stance — it clarifies what the research frontier looks like while FarmBot-class bed automation matures.

Real-world constraints

Learning from what actually happens

Lab-perfect conditions don't exist in backyards or community spaces. We're interested in studying how crops perform when real people manage them in real environments, complete with all the messiness that entails.

  • How plants respond to inconsistent care schedules and irregular access.
  • Low-cost sensor data collection methods anyone can replicate.
  • Documentation practices that make findings useful for others in similar contexts.
[Image: raised beds under lights with robots and sensors monitoring plant growth]

Observation over perfection. We document what actually happens when constraints are real and resources are limited.

Research opportunities we're seeking

Example partnerships we'd like to explore

These are the types of collaborations we're looking for; if you have a similar space or project, we'd love to talk.

[Image: robot arm preparing food in a kitchen environment]

Indoor Space Trials

Working with building owners or residents to test small-scale indoor growing in underused spaces like basements, storage rooms, or garages.

Questions: lighting efficiency, crop selection for low ceilings, simple automation for occasional access.

Rooftop & Outdoor Sites

Partnering with property managers, housing co-ops, or schools to study rooftop or courtyard growing with simple raised-bed setups.

Questions: structural load limits, weather exposure, irrigation logistics, community access models.

[Image: community research space with plants, robots, and people observing trials]

Community & Educational Projects

Collaborating with makerspaces, community centers, or food-justice groups to embed trials into existing educational or outreach programs.

Questions: accessible documentation, low-cost sensors, hands-on learning integration, data sharing.

Research themes

Questions we'd like to explore with partners

These are the types of questions that interest us; if you're working on similar problems or have space to test ideas, we'd welcome the conversation.

Fast-cycle greens

Can quick-turn crops work reliably in small indoor spaces with minimal intervention?

  • Lighting needs for 21–30 day cycles
  • Managing watering when access is inconsistent
  • Crop selection for low ceilings and small footprints

Local food logistics

How do you move produce a few blocks instead of a few states?

  • Delivery models for ultra-short distances
  • Storage and handling for micro-scale operations
  • Coordination across multiple small growing sites

Educational integration

Can indoor trials serve dual purposes — producing food and teaching STEM?

  • Simple data collection students can manage
  • Crop choices that deliver quick, visible results
  • Documentation practices that support learning goals

Have a space or project where we could explore these questions together? We'd welcome a conversation about potential collaboration.

Estimate your impact

What could a small trial produce?

Adjust the sliders to model a research trial in your space. Estimates use real data from our plant database of 40+ crops.

Sliders: 1–10 · 2 mo–12 mo

Example estimate: 21.7 kg estimated harvest · $182 grocery value · 54 meal servings · 10.8 kg CO₂ offset

Estimates assume beginner-friendly crops, standard growing conditions, and data from our open plant database. Actual results vary with climate, soil, and care consistency.
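For transparency, the arithmetic behind figures like the example above can be sketched as follows. The constants here are illustrative, chosen so the example harvest reproduces the figures shown; they are not taken from the actual plant database, which uses per-crop values.

```python
def trial_impact(harvest_kg: float,
                 value_per_kg: float = 8.40,    # assumed average grocery price
                 kg_per_serving: float = 0.4,   # assumed serving size
                 co2_per_kg: float = 0.5):      # assumed offset vs shipped produce
    """Turn an estimated harvest mass into the derived impact figures."""
    return {
        "grocery_value": round(harvest_kg * value_per_kg),
        "servings": int(harvest_kg / kg_per_serving),
        "co2_offset_kg": round(harvest_kg * co2_per_kg, 1),
    }

impact = trial_impact(21.7)
```

With a 21.7 kg harvest this yields $182, 54 servings, and 10.8 kg CO₂, matching the example; swapping in per-crop constants is a one-line change per crop.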