Unlearning With Poise

Amy Robertson & Karthik Dinakar

We discuss unlearning old habits and adapting software development and human potential practices for AI teams, emphasizing neuroscience-backed methods to foster resilience, psychological safety, and flexible innovation.

Unlearning with Poise

As change in the AI world accelerates, tech stacks evolve at a furious pace, upending traditional ways of doing software development and colliding with fast-changing product priorities. This can evoke uncertainty, stress, and cognitive overload. The brain craves stability and neuroendocrine equanimity, yet the world demands adaptation. This tension underscores the need for science-based approaches to how we think about people: approaches rooted in behavioral neuroscience, organizational psychology, and evidence-backed strategies. We don’t know if anyone in the AI tech world has quite figured out how to do this in the right way, but getting it right is quite important for us at Pienso.

As AI floods the tech workplace with information, we think quite a bit about brain-friendly work structures, such as minimizing context switching, implementing focus periods for experimentation, collaboration, and prioritization, and spending more time reducing technical debt to diminish cognitive clutter. AI engineers should feel safe to experiment, even when experiments fail. Neuroscience shows that trust and psychological safety foster learning and innovation, making them a prerequisite for a culture of rapid experimentation. Scrum practices, particularly sprint retrospectives, should provide a recurring opportunity for teams to reflect, voice concerns, and continuously refine both their work and interpersonal dynamics. We are trying to reinforce a high-trust culture where curiosity, failure, and course correction are seen as intrinsic to innovating. Evidence from neuroscience suggests that this kind of resilience can be cultivated through mindfulness, cognitive reframing, and structured, deliberate stress inoculation. The most important disposition is to be open to change and to unlearning parts of old habits, which can be very hard for a lot of folks. We thought it might be useful to list some key unlearn-and-adapt learnings we’ve amassed since 2022:

1. Fixed sprint planning does not work for AI development

Unlearn: The idea that AI development can fit neatly into two-week sprints. Unlike traditional software engineering, AI development involves unpredictable experimentation, data collection, and model training, which don’t always align with fixed iterations.

Adapt: Use a hybrid Agile-Kanban approach, where research and exploratory phases have more fluid timelines, while engineering tasks remain sprint-driven. Be open to changing and adapting the process based on what works and what does not work.

2. Deliverable-based metrics don’t fully define or capture progress

Unlearn: The assumption that each sprint must result in a tangible, shippable feature. AI teams often work on model iterations, hyperparameter tuning, and failed experiments—all of which are valuable but may not produce a clear “done” increment.

Adapt: Define sprint goals around learning milestones rather than product features. Focus on insights gained (e.g., “This auto-regressive model underperformed due to data bias”) rather than only completed features.

3. Features harnessing GenAI cannot be built like traditional software features

Unlearn: The belief that AI development follows a straightforward build-test-release cycle. AI is inherently probabilistic, requiring continuous iteration and experimentation.

Adapt: Integrate DevOps/LLMOps into Scrum. Treat models like living systems that require constant monitoring, feedback loops, and retraining pipelines, rather than assuming they will be “done” after a sprint.
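
To make the “living system” idea concrete, here is a minimal sketch of a monitoring-and-retraining loop in Python. It is illustrative only: the names (evaluate_on_recent_traffic, QUALITY_FLOOR, trigger_retraining_pipeline) are hypothetical placeholders rather than any particular product’s API, and the quality score is faked so the snippet runs on its own.

    import random  # stands in for real telemetry in this sketch

    QUALITY_FLOOR = 0.85  # assumed minimum acceptable quality score

    def evaluate_on_recent_traffic() -> float:
        """Score the deployed model on a sample of recent requests.
        A real system would replay logged prompts through an eval suite;
        here a random score keeps the sketch self-contained."""
        return random.uniform(0.7, 1.0)

    def trigger_retraining_pipeline() -> None:
        """Placeholder for kicking off a real retraining pipeline."""
        print("Quality below floor: scheduling a retraining run")

    def monitoring_tick() -> None:
        """One pass of the feedback loop, meant to run on a schedule."""
        score = evaluate_on_recent_traffic()
        print(f"Current quality score: {score:.2f}")
        if score < QUALITY_FLOOR:
            trigger_retraining_pipeline()

    if __name__ == "__main__":
        monitoring_tick()

The point of the sketch is the shape of the loop, not the specifics: evaluation keeps running after release, and retraining becomes a routine, triggerable event rather than a one-time milestone at the end of a sprint.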

4. Customers cannot clearly define AI requirements upfront

Unlearn: The idea that product owners can define AI requirements as clearly as they would for traditional software. AI use cases often evolve as teams uncover new data patterns or limitations. Enterprise customers often cannot define their needs at the start of a contract.

Adapt: Emphasize discovery-driven development—engage customers in iterative validation loops where models are tested in real-world conditions early and often, instead of waiting until full production.
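
These validation loops can also be made concrete with a small sketch: a discovery-driven round simply pairs real customer inputs with the model’s outputs and collects the customer’s judgments, so requirements emerge from evidence rather than an upfront spec. Everything here is hypothetical (the toy model, the sample inputs, the auto-accept rule), kept trivial so the snippet runs on its own.

    from typing import Callable, List, Tuple

    def validation_round(model_fn: Callable[[str], str],
                         real_inputs: List[str]) -> List[Tuple[str, str]]:
        """Pair each real-world input with the model's output for review."""
        return [(text, model_fn(text)) for text in real_inputs]

    def collect_feedback(pairs: List[Tuple[str, str]]) -> List[bool]:
        """Placeholder: in practice the customer marks each output
        acceptable or not; here we auto-accept short outputs."""
        return [len(output) < 80 for _, output in pairs]

    if __name__ == "__main__":
        toy_model = lambda text: text.upper()  # stand-in for a real model
        inputs = ["invoice dispute email", "shipping delay complaint"]
        pairs = validation_round(toy_model, inputs)
        accepted = collect_feedback(pairs)
        print(f"Acceptance rate: {sum(accepted) / len(accepted):.0%}")

Run early and often, a loop like this turns “customers cannot define requirements upfront” from a risk into a process: each round’s acceptance pattern tells the team where the model, and the requirements, actually stand.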

5. AI engineers should not be viewed as traditional developers

Unlearn: The assumption that AI engineers operate the same way as front-end or back-end developers. AI teams often have engineers who wear different hats across development cycles, spanning data science, research, and ML engineering, each requiring different workflows and different mental maps.

Adapt: Avoid excessive daily stand-ups or rigid sprint rituals that interrupt long model training cycles. Balance structured agile processes with flexible research-driven work.

We’ve been trying to incorporate science-based strategies to enhance workplace flexibility, keep stress minimized, and foster a culture of trust within our teams, similar to the culture at MIT. Folks should focus on being the most generative and best versions of themselves, shielded as much as possible from the vagaries of the fast-changing business world. Studies have demonstrated that employees with greater job flexibility and security are less likely to experience serious psychological distress or anxiety. This balance is not easy to arrive at, but it is something we take very seriously at Pienso. Here are some of our own unlearnings when it comes to organizational health for a generative AI workplace.

1. Successful AI talent management teams prioritize organizational health principles

Unlearn: The belief that HR teams must start with overly engineered HR processes and policies. Be cautious of traditional talent approaches: overemphasizing them too early in a company’s journey squelches trust and creativity. AI talent thrives in cultures that foster well-being, learning, iteration, and work that makes a difference.

Adapt: Start with actions in support of creating a healthy organization first. Understand business goals. Spend time with employees and learn what motivates them and what makes them interesting and different. Then create fit-for-purpose talent programs and processes that nurture their best work. If you need a good starting place for this, go see Josh Bersin’s The Healthy Organization Framework.

2. Successful AI talent management requires continuous individual development planning

Unlearn: The idea that AI talent fits a normal organizational bell curve and will patiently wait for an annual development plan. AI talent is highly intelligent and intrinsically motivated by curiosity and problem-solving. (For those of you comfortable with 9-box performance/potential evaluations of talent, you can think of AI talent as all Top Performers, with Growth Potential or High Potential.)

Adapt: Provide employees with frequent feedback and encourage follow-up conversations after their self-reflection. Be quick to recognize improvements and address any derailers as they emerge. AI talent enjoys a good puzzle: incorporate unstructured dialogue on industry advances, new technologies, and abstract customer challenges while discussing career interests, to connect individual motivation with potential impact.

3. Healthy conflict and tension are productive for innovation

Unlearn: The belief that conflict should be avoided, minimized, or judged. Because AI talent is exploratory and inquisitive, each person has ideas to contribute. In a fast-paced, changing industry with rich diversity of thought, it’s inevitable that overused strengths will sometimes create friction among teammates.

Adapt: Talent management teams should be leadership coaches first and encourage regular reflection on personal biases and stress reactions. AI teams are composed of multicultural talent from many different backgrounds, upbringings, and local customs. This, along with the fluidity and importance of quick value delivery to customers, naturally strains interpersonal relationships, especially if team members lack understanding of themselves or respect for others. What differentiates successful AI talent is the ability to accept feedback elegantly and challenge ideas respectfully. When leadership and HR teams address these situations early, directly, and with curiosity, they help facilitate healthy conflict.

Unlearning with poise is about developing a constantly learning organization that’s ready for whatever the future brings. The field of large language models will continue to change at lightning speed, but with a strong foundation of trust, ongoing development, and open dialogue, we aspire to ride the waves of change rather than be drowned by them. By consciously blending technical innovation with organizational health, we want to create not only better products but also better teams – teams that are curious, courageous, and capable of reinventing themselves as the world around them transforms.

References

Neuroscience, trust, and stress

Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383.

  • Foundational research showing how a climate of trust and psychological safety enables learning and team innovation.

Rock, D. (2008). SCARF: A brain-based model for collaborating with and influencing others. NeuroLeadership Journal, 1, 1–9.

  • Explains how the social brain (through status, certainty, autonomy, relatedness, and fairness) shapes trust, stress responses, and collaboration in teams.

Meichenbaum, D. (2007). Stress inoculation training: A preventative and treatment approach. In P. M. Lehrer, R. L. Woolfolk, & W. E. Sime (Eds.), Principles and practice of stress management (3rd ed., pp. 497–518). Guilford.

  • Classic work on “stress inoculation,” describing structured techniques that help individuals build cognitive resilience under changing or challenging conditions.

Covey, S. M. R., & Merrill, R. R. (2006). The speed of trust. Free Press.

  • Provides a roadmap for establishing trust.

Mindfulness, cognitive reframing, and resilience

Kabat-Zinn, J. (2003). Mindfulness-based interventions in context: Past, present, and future. Clinical Psychology: Science and Practice, 10(2), 144–156.

  • Lays out mindfulness-based methods for stress reduction and resilience, frequently referenced in organizational settings.

Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.

  • Explores the “growth mindset” (openness to learning and continuous adaptation), directly relevant to the notion of “unlearning” and experimentation culture.

Agile, scrum, and AI/ML development Practices

Schwaber, K., & Beedle, M. (2002). Agile software development with Scrum. Prentice Hall.

  • One of the seminal works on Scrum, including retrospectives, planning, and iterative improvement—key to aligning Agile ideas with more fluid AI experimentation.

Larman, C., & Vodde, B. (2016). Large-scale Scrum: More with LeSS. Addison-Wesley.

  • Discusses adapting Scrum principles at scale while maintaining flexibility—informative for hybrid AI/ML approaches that do not fit neatly into rigid sprints.

Humble, J., Molesky, J., & O’Reilly, B. (2014). Lean Enterprise: How high performance organizations innovate at scale. O’Reilly Media.

  • Shows how continuous learning, DevOps, and iterative feedback loops are essential for technology-driven organizations—parallels LLMOps for continuous model updates.

Bredehoft, A., Gruhn, V., & Striemer, R. (2022). MLOps: Emerging trends in data, code, model, and pipeline management. IEEE Software, 39(2), 15–20.

  • Overview of the emerging MLOps discipline, highlighting the “living system” nature of AI models and the need for ongoing retraining/monitoring.

Psychological safety, team culture, and organizational health

Edmondson, A. C. (2018). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.

  • Extended treatment of psychological safety—argues that a climate of trust is integral to rapid experimentation and candid feedback in knowledge-intensive fields like AI.

Lencioni, P. (2002). The five dysfunctions of a team: A leadership fable. Jossey-Bass.

  • While more of a practitioner fable, it is widely cited in organizational psychology for highlighting the importance of healthy conflict and trust in team effectiveness.

Bersin, J. (2022). The healthy organization: A framework for organizational health. The Josh Bersin Company.

  • A practitioner framework emphasizing a culture of well-being, agility, trust, and continuous learning as foundational to organizational performance and innovation.

Talent management, flexibility, and well-being

Aguinis, H. (2019). Performance management (4th ed.). Chicago Business Press.

  • Details performance-management practices such as frequent feedback, personal development plans, and the strengths/limitations of 9-box models, relevant for AI talent.

Greenhalgh, L., & Rosenblatt, Z. (1984). Job insecurity: Toward conceptual clarity. Academy of Management Review, 9(3), 438–448.

  • Classic article linking job insecurity to stress and reduced well-being—underscoring the point that flexibility and a sense of safety reduce serious psychological distress.

Mark, G., Gudith, D., & Klocke, U. (2008, April). The cost of interrupted work: More speed and stress. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 107–110). ACM.

  • Empirical study on context switching and how constant interruptions trigger stress and reduce deep-focus productivity, an important concern for AI engineers.

Reitz, M., & Chaskalson, M. (2016). Mindfulness in the workplace: The relationships among mindfulness, trust, and psychological safety. Academy of Management Proceedings, 2016(1).

  • Provides evidence that mindfulness interventions in organizations can increase trust and psychological safety, critical for high-performing teams.

Whitehurst, J. (2015). The open organization: Igniting passion and performance. Harvard Business Review Press.

  • Describes how open management principles in a fast-paced organization can be leveraged for business success.