Key Takeaway
Systems designed in Silicon Valley break in field conditions. Success requires continuous iteration with end users, adapting to unreliable connectivity, local workflows, and cultural context.
Problem Statement
Ethiopia's health system had data fragmented across multiple platforms: disease surveillance systems, health facility reporting, vaccination records, and supply chain databases. Each system served a purpose, but none of them talked to the others. Decision-makers couldn't get a unified view of health outcomes, resource allocation, or emerging issues.
This fragmentation had real consequences:
- Disease outbreaks detected late because data from different systems wasn't correlated
- Supply chain failures where clinics ran out of essential medicines while warehouses had surplus
- Resource allocation based on outdated or incomplete information
- Hours spent by health workers manually compiling reports rather than treating patients
The Ministry of Health needed integrated infrastructure that could unify disparate data sources, provide real-time visibility, and support data-driven decision making—all while working within the realities of limited connectivity, varying technical capacity, and existing workflows.
Technical Approach
As co-founder of Zenysis (YC W16), I led field deployment of an integrated health data platform in Ethiopia. The work required building technical infrastructure while navigating operational realities, ministry partnerships, and on-the-ground implementation challenges.
Platform Development
- Data Integration: Built connectors to unify data from DHIS2, OpenMRS, logistics systems, and custom ministry databases (a connector sketch follows this list)
- Real-Time Processing: Created pipelines to process incoming data and surface insights immediately rather than requiring manual report compilation (see the alerting sketch after this list)
- Offline Capability: Designed the system to work with intermittent connectivity, which is common in Ethiopian field conditions (see the outbox sketch after this list)
- User-Centered Design: Worked directly with ministry officials and health workers to understand workflows and build interfaces that matched actual use patterns
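The connectors followed a pull-and-normalize pattern: fetch a reporting period from a source system, then flatten each value into a common record shape that downstream pipelines can merge with other sources. Below is a minimal sketch of what a DHIS2-style connector can look like, assuming an instance exposing the standard dataValueSets endpoint; the base URL, credentials, and identifiers are illustrative placeholders, not the production configuration.

```python
# Minimal sketch of a pull-based connector for a DHIS2-style reporting system.
# The base URL, credentials, and identifiers below are illustrative placeholders.
import requests

DHIS2_BASE = "https://dhis2.example.gov.et/api"   # hypothetical instance
AUTH = ("report_user", "secret")                  # placeholder credentials

def fetch_data_values(dataset_id: str, org_unit: str, period: str) -> list[dict]:
    """Pull one reporting period for one facility and normalize each value
    into a flat record that downstream pipelines can merge with other sources."""
    resp = requests.get(
        f"{DHIS2_BASE}/dataValueSets.json",
        params={"dataSet": dataset_id, "orgUnit": org_unit, "period": period},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    records = []
    for dv in resp.json().get("dataValues", []):
        records.append({
            "source": "dhis2",
            "indicator": dv["dataElement"],
            "facility": dv["orgUnit"],
            "period": dv["period"],
            "value": dv["value"],
        })
    return records
```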
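To make "surface insights immediately" concrete, here is a sketch of the kind of incremental check a pipeline stage can run as normalized records arrive, flagging facilities whose latest reported count jumps well above their recent baseline. The threshold, history length, and record shape (matching the connector sketch above) are illustrative assumptions, not the production alerting logic.

```python
# Sketch of an incremental pipeline check: flag facilities whose newest reported
# value is far above their recent average. Thresholds are illustrative only.
from collections import defaultdict
from statistics import mean

def outbreak_flags(records: list[dict], ratio: float = 3.0, min_history: int = 4) -> list[str]:
    """Return facility IDs whose newest value exceeds `ratio` times their recent mean."""
    by_facility: dict[str, list[float]] = defaultdict(list)
    for r in sorted(records, key=lambda r: r["period"]):   # oldest period first
        by_facility[r["facility"]].append(float(r["value"]))
    flagged = []
    for facility, series in by_facility.items():
        if len(series) <= min_history:
            continue                                       # not enough history for a baseline
        baseline = mean(series[:-1])
        if baseline > 0 and series[-1] > ratio * baseline:
            flagged.append(facility)
    return flagged
```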
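The offline requirement comes down to a store-and-forward pattern: writes are queued locally and flushed opportunistically, so intermittent connectivity never blocks data entry. The outbox sketch below illustrates the idea; the file path, endpoint, and backoff parameters are assumptions for the example, not the deployed setup.

```python
# Sketch of an offline-tolerant outbox: records are appended to a local queue and
# pushed when connectivity allows. Paths and the endpoint are illustrative only.
import json
import os
import time
import requests

QUEUE_PATH = "outbox.jsonl"                        # hypothetical local queue file
UPLOAD_URL = "https://sync.example.gov.et/ingest"  # hypothetical ingest endpoint

def enqueue(record: dict) -> None:
    """Append a record to the local outbox; this always succeeds, even offline."""
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def flush_outbox(max_backoff: int = 300) -> None:
    """Try to push queued records, backing off on failure rather than losing data."""
    if not os.path.exists(QUEUE_PATH):
        return
    with open(QUEUE_PATH) as f:
        pending = [json.loads(line) for line in f if line.strip()]
    if not pending:
        return
    backoff = 5
    for _ in range(8):                  # bounded retries; a scheduler re-runs this later
        try:
            requests.post(UPLOAD_URL, json=pending, timeout=30).raise_for_status()
            open(QUEUE_PATH, "w").close()   # clear the outbox after a successful push
            return
        except requests.RequestException:
            time.sleep(backoff)             # connectivity is flaky; wait and retry
            backoff = min(backoff * 2, max_backoff)
```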
Field Deployment
Living in Addis Ababa for months, I worked alongside ministry teams to ensure the system worked in practice, not just in theory:
- Daily iteration based on user feedback and observed pain points
- Training programs adapted to varying levels of technical literacy
- Troubleshooting connectivity issues, data quality problems, and workflow mismatches
- Building trust with stakeholders through consistent presence and responsiveness
- Adapting system design to accommodate local context—from internet reliability to government procurement cycles
Impact
The platform enabled real-time health monitoring and resource allocation across Ethiopia:
Operational Improvements
- Unified Dashboard: Ministry officials gained a single-pane view of health system performance across all regions
- Faster Response: Disease outbreak detection time reduced from weeks to days through automated correlation of surveillance data
- Resource Optimization: Supply chain visibility prevented both stockouts and waste
- Data-Driven Decisions: Evidence-based resource allocation replaced intuition and anecdote
Systemic Change
- Established patterns for government data integration that influenced subsequent health IT investments
- Demonstrated value of real-time data infrastructure in resource-constrained settings
- Built institutional capacity within ministry for data-driven decision making
What I Learned
Field deployment is where theory meets reality. Systems that work perfectly in San Francisco break in unexpected ways in Ethiopia—from power outages to connectivity issues to browsers that haven't been updated in years. You can't design for field conditions from an office; you have to be there, observing failures, adapting continuously.
Users know their workflows better than you ever will. Ministry officials had deep expertise in health operations that our team lacked. When our design assumptions conflicted with their workflows, they were right and we were wrong. User-centered design isn't about surveys or personas—it's about continuous engagement with people actually doing the work.
Trust is earned through presence and responsiveness. Government partnerships don't succeed because of clever sales pitches. They succeed when stakeholders see you're committed for the long haul, responsive to problems, and genuinely invested in their success. Living in Addis, being available, showing up consistently—that built relationships that enabled deployment.
Technical infrastructure is only 30% of the work. The other 70% is training, change management, stakeholder alignment, process documentation, and ongoing support. Software doesn't transform systems; software plus sustained human effort transforms systems.
Sometimes slow is fast. We could have pushed for faster rollout, but taking time to ensure each deployment worked well, users were comfortable, and bugs were fixed created momentum. Rushed deployment would have generated resistance and failure. Patience created success.
Bridging Worlds
This project required constant translation between different domains:
- Engineers in San Francisco ↔ Ministry officials in Addis Ababa
- Software development cycles ↔ Government procurement timelines
- Technical architecture decisions ↔ Operational constraints
- Product roadmap priorities ↔ Urgent ministry needs
Success came from operating in that gap—understanding technical possibilities and operational realities, translating between cultures and contexts, building systems that worked for actual users in actual conditions.
These lessons carried through to my subsequent work at Aclima (field deployment of environmental sensors), UN Global Pulse (translating data science for policy), Palantir (deploying AI in government), and Intramotev (autonomous systems in diverse rail environments). The core lesson remains: field deployment requires humility, iteration, and genuine partnership with end users.