
Top 12 KPI for DevOps Teams to Track Success


TL;DR – KPIs for DevOps

DevOps teams rely on clear KPIs to track speed, stability, quality, and operational efficiency. Monitoring metrics like deployment frequency, CFR, MTTR, defect rates, and time to market helps teams improve performance and reduce risk. These KPIs enable predictable delivery, stronger collaboration, and better customer experiences. By measuring consistently, teams build scalable, resilient, high-performing DevOps systems.
 

Top 12 DevOps KPIs to Track Success: Essential Metrics for High Performance


Modern engineering teams rely on speed, quality, and reliability to stay competitive. Therefore, understanding the right KPIs for DevOps becomes essential for tracking performance and driving improvement. DevOps organizations depend on metrics that reflect stability, agility, team productivity, and user satisfaction. Furthermore, automation, cloud adoption, and modern delivery pipelines require clear measurement frameworks. Without tracking performance, teams struggle to scale efficiently or respond to change. Additionally, companies such as Techstack Digital use strong KPI frameworks to align engineering, operations, and business objectives. This guide explores the top KPIs, why they matter, and how teams can use them to create predictable and sustainable delivery processes.

What Are DevOps KPIs?

DevOps KPIs are measurable indicators that evaluate how effectively engineering teams design, build, test, deploy, and maintain software systems. These KPIs offer visibility into speed, stability, quality, and operational excellence. Additionally, they help organizations identify bottlenecks, streamline workflows, and improve reliability. DevOps KPIs track important areas such as deployment frequency, change failure rate, incident response, defect trends, and operational workload. Furthermore, these KPIs support consistent improvement by quantifying progress and guiding decision-making. Each KPI reflects an aspect of the team’s DevOps maturity. Metrics like the DORA indicators have become industry standards for assessing performance. When used consistently, DevOps KPIs transform engineering teams into predictable and high-performing units. Therefore, tracking KPIs helps teams move from guesswork to data-driven action. Every DevOps practice benefits from a clear, structured measurement approach.

Why Measuring DevOps Performance Matters

Measuring DevOps performance makes improvement intentional rather than reactive. Teams often assume they are moving fast simply because deployments happen frequently. However, KPIs reveal the real story behind the delivery pipeline. By monitoring performance, teams identify gaps in automation, testing depth, or operational readiness. Additionally, KPIs help clarify whether changes improve or harm stability. When teams track precise metrics, they avoid assumptions and base decisions on facts. Furthermore, KPIs highlight which processes need investment, such as environment provisioning, CI/CD configuration, or testing workflows. Tracking performance removes ambiguity and supports long-term success. Leadership teams also use performance data to justify budgets, training, and upgrades. Therefore, DevOps KPIs serve both operational and strategic value. They ensure that teams build resilient, high-performing systems while aligning engineering output with business outcomes. This visibility leads to better communication and predictable software delivery.

The Link Between DevOps KPIs and Business Success

DevOps KPIs influence business results because technology powers most modern operations. Faster deployments lead to quicker product improvements, enabling companies to deliver features that attract and retain customers. Additionally, stability metrics ensure systems stay reliable, reducing downtime and maintaining brand reputation. Monitoring KPIs also improves operational cost efficiency. For example, optimizing pipelines and environments reduces cloud waste. Furthermore, quality metrics reduce the risk of production incidents, lowering support costs and enhancing customer trust. When organizations track the right KPIs, engineering output aligns with revenue goals, customer satisfaction, and strategic growth. DevOps KPIs therefore become more than technical indicators—they reflect business performance. Strong KPI frameworks support innovation, resilience, and long-term success. They empower teams to adjust quickly, experiment confidently, and improve continuously. As a result, companies achieve predictable delivery, reduced risk, and sustainable scaling.

Why Tracking DevOps KPIs Matters

Building a Culture of Continuous Improvement

Continuous improvement requires visibility. DevOps teams cannot improve what they cannot measure. Tracking KPIs encourages teams to iterate, experiment, and refine delivery processes. Additionally, metrics highlight weak areas such as long build times, frequent deployment failures, or high defect escape rates. When teams see progress reflected in numbers, they stay motivated and focused. KPIs also promote accountability across engineering, QA, operations, and product roles. Furthermore, continuous improvement cultures rely on transparent data that exposes bottlenecks without assigning blame. By tracking KPIs consistently, teams normalize learning, experimentation, and feedback. This culture accelerates innovation and strengthens collaboration. Over time, KPIs guide teams toward streamlined pipelines and optimized operations. They also help teams assess new tools or methodologies objectively.

Driving Collaboration Between Development and Operations

DevOps bridges two historically separate disciplines. Tracking KPIs unites developers and operations under shared goals. Metrics like deployment frequency, incident response time, and change failure rate reflect collective ownership. Additionally, KPIs reduce friction by clarifying responsibilities and reducing miscommunication. Developers understand how their code affects stability, while operators understand how workflows impact speed. Furthermore, unified KPIs help avoid siloed optimization. For example, faster deployments mean nothing if operational failures remain high. By monitoring KPIs together, teams find balance between speed and stability. This shared visibility improves trust and collaboration. As teams collaborate deeply, DevOps maturity increases. KPIs become the link that aligns both sides of the engineering lifecycle.

Using KPIs for Data-Driven Decision Making

DevOps KPIs transform decision-making from assumptions to evidence-based planning. Leaders and engineers make better decisions when they rely on consistent metrics. For example, long cycle times may indicate a need for more automation, deeper testing, or improved communication. High MTTR may reveal gaps in monitoring or alerting systems. Additionally, KPIs help prioritize investments. Teams use KPI data to justify better CI/CD tools, faster build servers, or improved cloud configurations. Furthermore, KPIs support experimentation by making results measurable. When teams test new workflows, metrics show whether performance improved or declined. This data-driven approach reduces waste, increases efficiency, and strengthens alignment across engineering and business functions. KPIs become the foundation for predictable performance.

The Core DORA Metrics Every DevOps Team Should Monitor

Deployment Frequency

Deployment frequency represents how often code changes ship to production. High-performing teams deploy multiple times per day, while slower teams may deploy weekly or monthly. Additionally, frequent deployments reduce risk because changes become smaller and easier to troubleshoot. Deployment frequency reflects team agility and CI/CD pipeline quality. When deployment frequency drops, it signals bottlenecks or inefficiencies in testing, approvals, or environment readiness. Furthermore, this metric helps teams evaluate delivery maturity. High frequency indicates strong automation, fast feedback loops, and resilient pipelines. By tracking deployment frequency, teams align release cadence with business goals. This KPI becomes especially important for SaaS companies and customer-facing applications.

Definition and Importance

Deployment frequency measures how often a team successfully releases code into production or a staging environment. It indicates how quickly new features, fixes, and improvements reach end users. Additionally, high deployment frequency reduces risk by minimizing the size of each change. Frequent deployments create faster feedback loops and enable iterative development. This KPI reflects a team’s ability to deliver value rapidly.

How to Measure Deployment Frequency

Teams measure deployment frequency by counting successful deployments within a defined time frame, such as daily, weekly, or monthly intervals. Automated CI/CD dashboards simplify this tracking. Additionally, teams must differentiate between production deployments and internal builds to maintain accuracy. Monitoring tools integrated with pipelines provide real-time visibility. Clear tracking ensures consistent evaluation.
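As an illustrative sketch, the counting described above can be automated against a list of deploy timestamps exported from a CI/CD tool. The record shape here (a list of ISO-8601 strings, internal builds already filtered out) is an assumption, not any specific tool’s format:

```python
from collections import Counter
from datetime import datetime

def deployments_per_week(timestamps):
    """Count successful production deployments per ISO week.

    `timestamps` is a list of ISO-8601 strings, one per successful
    production deployment (internal builds excluded upstream).
    """
    weeks = Counter()
    for ts in timestamps:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        weeks[(year, week)] += 1
    return dict(weeks)

deploys = ["2024-03-04T10:00", "2024-03-05T16:30", "2024-03-12T09:15"]
print(deployments_per_week(deploys))  # {(2024, 10): 2, (2024, 11): 1}
```

Bucketing by ISO week keeps the time frame consistent, which matters when comparing frequency trends month over month.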

Best Practices to Improve Deployment Cadence

Improving deployment frequency begins with automation. Teams should automate testing, environment provisioning, and approvals. Additionally, reducing manual steps minimizes delay. Feature flags help release small changes safely. Furthermore, teams should eliminate bottlenecks in CI/CD pipelines. Regularly reviewing build times and test coverage also accelerates delivery.

Lead Time for Changes

Lead time for changes measures how long it takes from code commit to production release. Shorter lead times reflect efficient workflows and strong automation. Additionally, this metric identifies friction in the development pipeline. Long lead times signal problems in testing depth, review cycles, or environment stability. Lead time connects directly to business agility because faster changes mean faster responses to customer needs. Furthermore, reducing lead time improves developer satisfaction by enabling rapid iteration. Teams track lead time to understand how quickly they can innovate. Improving lead time requires collaboration across engineering, QA, and operations. This KPI provides insight into pipeline efficiency and responsiveness.

What It Measures

Lead time measures the duration from the moment developers commit code until that code ships to production. It reveals how efficiently teams move work through the pipeline. Additionally, it highlights delays in testing, reviews, or deployment stages. Short lead times reflect streamlined workflows and strong automation. This KPI indicates delivery speed.
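A minimal sketch of this measurement, assuming a mapping of change IDs to (commit, deploy) timestamp pairs (a hypothetical export shape). The median, rather than the mean, is the figure DORA surveys typically report, since it resists skew from one slow outlier:

```python
from datetime import datetime
from statistics import median

def lead_time_hours(changes):
    """Median lead time for changes, in hours: commit -> production deploy.

    `changes` maps a change ID to (committed_at, deployed_at) ISO strings.
    """
    hours = [
        (datetime.fromisoformat(dep) - datetime.fromisoformat(com)).total_seconds() / 3600
        for com, dep in changes.values()
    ]
    return median(hours)

changes = {
    "PR-101": ("2024-03-04T09:00", "2024-03-04T15:00"),  # 6 h
    "PR-102": ("2024-03-04T11:00", "2024-03-05T11:00"),  # 24 h
    "PR-103": ("2024-03-05T08:00", "2024-03-05T10:00"),  # 2 h
}
print(lead_time_hours(changes))  # 6.0
```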

Why Shorter Lead Times Matter

Shorter lead times improve business agility by enabling rapid releases. Teams deliver features faster, fix bugs quickly, and respond to market changes. Additionally, shorter lead times reduce backlog pressure and improve developer morale. They also reduce operational risk because smaller changes deploy more safely.

Tools and Techniques to Reduce Lead Time

Teams reduce lead time by automating builds, tests, and deployments. CI/CD tools like GitHub Actions, GitLab CI, or Jenkins simplify workflows. Additionally, optimizing pull request reviews and reducing dependencies accelerates movement. Improving test coverage shortens review cycles.

Change Failure Rate (CFR)

Change Failure Rate represents the percentage of deployments that lead to failures in production. High CFR indicates unstable releases, inadequate testing, or misaligned processes. Additionally, CFR helps teams evaluate release quality and identify systemic weaknesses. Monitoring CFR reveals whether teams move too fast without sufficient validation. A low CFR reflects high-quality engineering practices, strong automation, and reliable environments. Furthermore, CFR guides leaders in balancing innovation with stability. Reducing CFR requires collaboration between developers, QA, and operations. Teams should analyze patterns behind failures to improve testing, code quality, and monitoring. CFR becomes a cornerstone metric for reliability and performance.

Understanding CFR in DevOps

CFR measures how many deployments result in incidents, degraded service, or rollbacks. It reflects release quality and system resilience. Additionally, CFR highlights whether teams apply sufficient testing and monitoring. A lower CFR indicates maturity in automation and validation.

Benchmarking Success Rates

According to DORA benchmarks, elite teams maintain a CFR below 15%, while average teams hover between 20% and 30%. Tracking against these benchmarks helps teams understand where they stand. Teams also compare CFR trends over time to assess improvement.
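The calculation itself is simple; the hard part is the data join between deploy events and incidents. As a sketch, with an assumed record shape where each deployment carries a boolean flag derived from that join:

```python
def change_failure_rate(deployments):
    """CFR as a percentage: deployments that caused a production
    failure, rollback, or degraded service, over all deployments.

    `deployments` is a list of dicts with a `caused_failure` flag
    (an assumed shape; real data comes from joining deploy events
    to your incident tracker).
    """
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["caused_failure"])
    return 100.0 * failed / len(deployments)

flags = (False, False, True, False, False, False, False, False, False, True)
deploys = [{"caused_failure": f} for f in flags]
print(f"{change_failure_rate(deploys):.0f}%")  # 20% -- above the ~15% elite benchmark
```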

Strategies to Minimize Change Failures

Teams reduce CFR by improving automated tests, strengthening code reviews, and implementing canary releases. Additionally, monitoring pre-production environments helps catch issues earlier. Regular post-incident reviews also drive improvement.

Mean Time to Recovery (MTTR)

Mean Time to Recovery measures how long it takes to restore service after an incident. MTTR reflects operational readiness and incident response quality. Additionally, low MTTR indicates strong monitoring systems, fast troubleshooting, and efficient rollback processes. High MTTR signals gaps in alerting, documentation, or ownership. MTTR matters because downtime impacts revenue, user trust, and reputation. Furthermore, improving MTTR requires collaboration between operations, developers, and SREs. Teams use MTTR to evaluate resilience in real-world environments. Reducing MTTR helps teams achieve high availability and predictable service performance.

What MTTR Reveals About System Resilience

MTTR shows how quickly teams recover from failures. Low MTTR means systems withstand disruptions effectively. Additionally, MTTR reflects monitoring depth and incident response readiness. It is a key metric for reliability.
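Computed over incident records, MTTR is the average of detection-to-resolution durations. This sketch assumes incidents are available as (detected_at, resolved_at) timestamp pairs, a shape most incident trackers can export:

```python
from datetime import datetime

def mttr_minutes(incidents):
    """Mean Time to Recovery in minutes: average of
    (resolved_at - detected_at) across incidents.
    """
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    ("2024-03-04T10:00", "2024-03-04T10:30"),  # 30 min
    ("2024-03-06T22:10", "2024-03-06T23:40"),  # 90 min
]
print(mttr_minutes(incidents))  # 60.0
```

Note that MTTR starts at detection, which is why weak alerting inflates it: time before anyone notices the failure never even enters the measurement.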

Common Causes of High MTTR

High MTTR often results from weak alerting, poor documentation, low observability, or unclear ownership. Slow troubleshooting and manual recovery steps also increase recovery time.

Ways to Improve Recovery Time

Teams improve MTTR by automating rollback procedures, enhancing monitoring, and documenting playbooks. Additionally, proactive incident training helps teams respond faster. Clear ownership speeds resolution.

Additional Deployment and Change Metrics

Change Volume

Change volume measures how many changes enter the pipeline over a specific period. High change volume may indicate increased productivity or feature demand. Additionally, excessive change volume can overwhelm pipelines and increase operational risk. Monitoring this KPI helps teams balance speed with stability. It also reveals whether engineering workloads align with product goals. Furthermore, high change volume combined with high CFR often signals insufficient testing or review capacity. Teams track change volume to understand workload distribution and optimize processes. Transparent measurement improves planning and resource allocation. This KPI helps prevent burnout and maintain consistent delivery cadence.

Definition and Relevance

Change volume represents total changes pushed through the pipeline. It shows team output and workload intensity. Additionally, it helps identify whether teams manage a sustainable pace. High volume must correlate with quality metrics.

How to Balance Change Volume and Stability

Teams balance change volume by prioritizing changes, improving testing, and breaking work into smaller pieces. Additionally, using feature flags helps reduce risk. Stable pipelines support higher change volume safely.

Deployment Time / Deployment Speed

Deployment time measures how long it takes to push a change from pipeline trigger to production. Faster deployments reflect strong automation, well-designed pipelines, and efficient infrastructure. Additionally, long deployment time creates delivery bottlenecks and friction. Monitoring deployment time shows whether scripts, builds, or tests slow the workflow. Furthermore, improving deployment speed increases agility and reduces cycle times. Teams analyze deployment steps to eliminate unnecessary processes and simplify workflows. This KPI helps engineer smoother transitions and predictable deployments.

Why Deployment Time Matters

Short deployment time increases agility and reduces wait times. Additionally, faster deployments help maintain momentum during development. Long deployment times frustrate teams and slow releases.

How to Streamline Deployment Pipelines

Teams streamline pipelines by optimizing build steps, caching dependencies, and automating approvals. Additionally, modern CI/CD tools help reduce overhead. Pipeline reviews eliminate outdated tasks.

Cycle Time

Cycle time measures how long it takes for a task to move from “in progress” to completion. It reveals productivity levels and workflow efficiency. Additionally, long cycle times may indicate bottlenecks in reviews, testing, or development processes. Teams track cycle time to improve flow and reduce delays. Cycle time complements lead time but focuses more on internal team processes. Furthermore, reducing cycle time improves predictability and morale. Monitoring this KPI helps teams identify which stages slow down progress.

What Cycle Time Measures in DevOps

Cycle time measures active work duration. It shows how efficiently tasks progress. Additionally, long cycle time highlights workflow friction. This KPI reflects team responsiveness and capacity.
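As a small sketch, cycle time can be computed from the board-status transitions of each ticket. The (started_at, completed_at) export shape assumed here is hypothetical; any tool that timestamps status changes can supply it:

```python
from datetime import datetime

def avg_cycle_time_days(tasks):
    """Average cycle time in days: 'in progress' -> 'done' per task.

    `tasks` maps a ticket ID to (started_at, completed_at) ISO dates.
    """
    days = [
        (datetime.fromisoformat(done) - datetime.fromisoformat(start)).days
        for start, done in tasks.values()
    ]
    return sum(days) / len(days)

tasks = {
    "DEV-1": ("2024-03-01", "2024-03-04"),  # 3 days
    "DEV-2": ("2024-03-02", "2024-03-07"),  # 5 days
}
print(avg_cycle_time_days(tasks))  # 4.0
```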

Reducing Bottlenecks in the Development Lifecycle

Teams reduce cycle time by improving collaboration, automating tests, and simplifying approvals. Additionally, removing handoff delays improves flow. Regular workflow analysis prevents recurring issues.

Quality, Stability, and Reliability Metrics

Defect Escape Rate

Defect escape rate measures how many bugs reach production instead of being caught earlier. High escape rates indicate weak testing coverage or insufficient quality controls. Additionally, this KPI reflects the effectiveness of QA and automated testing. Teams track escape rates to improve early detection. Furthermore, reducing defect escapes prevents costly incidents and improves customer experience. This KPI supports continuous quality improvement.

Measuring Escaped Defects

Teams count defects found in production versus those caught earlier. Additionally, tracking trends reveals whether testing improvements work. Low escape rates reflect strong testing pipelines.
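The production-versus-earlier comparison reduces to a simple ratio. A minimal sketch, with per-release defect counts supplied as plain numbers:

```python
def defect_escape_rate(found_pre_release, found_in_production):
    """Escape rate as a percentage: defects that reached production,
    over all defects found in the release cycle."""
    total = found_pre_release + found_in_production
    if total == 0:
        return 0.0
    return 100.0 * found_in_production / total

# One release cycle: QA caught 45 bugs; users and monitoring surfaced 5 more.
print(f"{defect_escape_rate(45, 5):.1f}%")  # 10.0%
```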

Improving Testing to Catch Issues Earlier

Teams reduce escape rates by improving automation, expanding coverage, and using advanced testing such as smoke tests, regression suites, and chaos engineering.

Defect Volume

Defect volume measures total defects discovered over time. This KPI reflects system health and code quality. Additionally, increasing defect volume signals deteriorating code or rushed deployments. Monitoring volumes reveals patterns in modules, teams, or releases. Furthermore, defect volume informs resource allocation for QA and stability improvements. Teams use this metric to drive long-term quality.

Tracking Total Defects Over Time

Teams monitor defect counts per release cycle. Additionally, trends show whether code quality improves or declines. Clear categorization improves analysis.

Using Defect Trends to Drive Quality Improvements

Defect patterns guide refactoring and test investments. Additionally, focusing on high-defect areas improves stability. Continuous review reduces systemic issues.

Failed Deployment Rate

Failed deployment rate tracks deployments that do not succeed or require rollback. It differs from CFR by including pipeline issues. Additionally, high failure rates suggest weak automation or misconfigured environments. Monitoring this KPI helps teams refine deployment pipelines.

How It Differs from CFR

CFR measures production failures, while failed deployment rate includes failures earlier in the pipeline. Additionally, it captures CI/CD reliability issues.

Mitigating Deployment Failures Through Automation

Automation reduces human error. Additionally, improving environment parity and adding pre-deployment checks lowers failure risks.

Availability / Uptime

Availability measures system reliability and service uptime. High availability reflects resilient systems, strong architecture, and proactive monitoring. Additionally, uptime becomes essential for customer trust and business continuity. Teams track availability to ensure SLAs remain intact.

Measuring Service Reliability

Teams measure uptime as a percentage of total availability. Additionally, SRE tools help track reliability metrics precisely.
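The percentage calculation maps directly onto the familiar "nines" vocabulary. A quick sketch over a 30-day window:

```python
def availability_pct(total_minutes, downtime_minutes):
    """Availability as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# A 30-day month with 43 minutes of downtime lands at roughly "three nines".
month = 30 * 24 * 60  # 43,200 minutes
print(f"{availability_pct(month, 43):.3f}%")  # 99.900%
```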

The Role of SLIs and SLOs

SLIs define what to measure. SLOs define expected performance. Additionally, these guide reliability efforts across teams.
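One practical way SLOs guide reliability work is through the error budget: the downtime an SLO implicitly permits. A sketch of that arithmetic for a 30-day window:

```python
def error_budget_minutes(slo_pct, window_days=30):
    """Allowed downtime (the error budget) implied by an availability SLO."""
    window_min = window_days * 24 * 60
    return window_min * (100.0 - slo_pct) / 100.0

for slo in (99.0, 99.9, 99.99):
    print(f"{slo}% SLO -> {error_budget_minutes(slo):.1f} min/month of budget")
```

Each extra "nine" shrinks the budget tenfold, which is why tightening an SLO is a cost decision, not just a target change.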

Operational Efficiency and Workload Metrics

Unplanned Work

Unplanned work includes incidents, urgent fixes, and unexpected tasks. High unplanned work disrupts planned sprints and slows progress. Additionally, this KPI reveals stability issues.

Identifying Reactive Workload Patterns

Teams track how much work comes unexpectedly. Patterns highlight deeper system issues requiring attention.

Steps to Reduce Unplanned Incidents

Teams reduce unplanned work by improving monitoring, hardening systems, and automating repetitive tasks. Additionally, incident analysis prevents repetition.

Customer Ticket Volume

Customer ticket volume reflects user-facing issues. High ticket volume signals quality or usability problems. Tracking this KPI aligns engineering with user expectations.

How Customer Issues Reflect Software Quality

Support trends reveal hidden defects and gaps. Additionally, recurring tickets highlight problem areas.

Leveraging Support Data for Continuous Improvement

Teams use support patterns to guide fixes. Additionally, aligning engineering with support reduces long-term issues.

SLA Compliance

SLA compliance measures how well teams meet service level commitments. High compliance builds trust and meets customer expectations. Additionally, SLA breaches can result in penalties.

Monitoring Service Level Performance

Teams monitor response time, uptime, and reliability. Additionally, consistent tracking ensures commitments stay intact.

Meeting Commitments Through Proactive Monitoring

Teams prevent SLA breaches through monitoring, automation, and incident readiness. Additionally, strong SRE practices support compliance.

Business and Value-Driven Metrics

Time to Market

Time to market reflects how quickly teams deliver features. Faster time to market strengthens competitive advantage. Additionally, it links engineering efficiency with business success.

Linking DevOps Efficiency to Business Outcomes

Faster releases drive revenue and user satisfaction. Additionally, strong pipelines accelerate value delivery.

Techniques to Accelerate Delivery

Teams speed delivery by automating tests, optimizing pipelines, and improving collaboration.

Cost per Deployment / Operational Cost Efficiency

Cost per deployment measures financial efficiency. High deployment costs indicate inefficient pipelines or cloud waste. Additionally, reducing cost improves sustainability.

Measuring Cost Efficiency in DevOps Pipelines

Teams track compute usage, cloud costs, and resource overlap. Clear tracking helps reduce waste.

Optimizing Resource Usage and Cloud Spending

Teams optimize environments, scale resources properly, and leverage automation. Additionally, cost reviews maintain efficiency.

Developer Productivity Metrics

Developer productivity metrics include pull request throughput, cycle completion, and delivery speed. These reflect team workflow efficiency. Additionally, clear metrics prevent micromanagement.

Tracking Pull Request Size, Throughput, and Cycle Completion

Tracking PR patterns reveals bottlenecks. Additionally, small, frequent PRs improve flow.

Identifying Bottlenecks Without Micromanaging

Teams use metrics for improvement, not pressure. Additionally, transparency helps collaboration.

Security and Compliance Metrics

Security metrics track time to detect vulnerabilities, patch cycles, and compliance adherence. These KPIs protect systems from attacks. Additionally, DevSecOps integrates security earlier.

Time to Detect and Fix Vulnerabilities

Short detection and patch times reduce risk. Additionally, automated scans enforce safety.

Incorporating DevSecOps into KPI Frameworks

Teams embed security into pipelines. Additionally, compliance reporting improves governance.

How to Choose and Prioritize the Right KPIs for Your Team

Aligning KPIs with Business Objectives

KPIs must support business goals. Teams align metrics with revenue, stability, and customer satisfaction. Additionally, each KPI should drive strategic outcomes.

Mapping Technical Metrics to Strategic Goals

Technical KPIs influence business metrics. Additionally, mapping ensures alignment.

Leading vs Lagging Indicators

Leading indicators predict performance. Lagging indicators reflect outcomes. Both are essential. Additionally, balance improves visibility.

Understanding Predictive vs Retrospective Metrics

Predictive metrics forecast trends. Retrospective metrics show past performance. Additionally, both guide decisions.

Avoiding Vanity Metrics and Metric Overload

Vanity metrics mislead teams by appearing positive without driving improvement. Teams must focus on meaningful KPIs. Additionally, too many metrics cause confusion.

Focusing on What Truly Drives Performance

Teams prioritize KPIs that impact speed, quality, and stability. Additionally, removing noise strengthens clarity.

Tips for Implementing and Tracking DevOps KPIs Effectively


Setting Baselines and Realistic Targets

Baselines define where teams stand. Targets guide improvement. Additionally, realistic goals maintain motivation.

Establishing Initial Performance Benchmarks

Teams measure current performance before setting goals. Additionally, benchmarking improves accuracy.

Building Dashboards and Automation Pipelines

Dashboards provide visibility. Automated pipelines track metrics easily. Additionally, visualization improves decision-making. Tools like Grafana, Splunk, and Datadog support real-time tracking.

Best Tools for Real-Time DevOps Monitoring (e.g., Grafana, Splunk, Datadog)

Top tools visualize pipelines and operational metrics. Additionally, integrations simplify tracking and alerting.

Continuous Review and Evolution of Metrics

KPIs must evolve as systems scale. Teams review metrics regularly. Additionally, adjusting KPIs improves relevance.

How to Adapt KPIs as Teams and Systems Mature

Teams refine KPIs to match growth. Additionally, ongoing review ensures accuracy.

Common Challenges in Measuring DevOps KPIs

Data Fragmentation Across Tools

Data exists across CI/CD, monitoring, and ticketing tools. Fragmentation reduces visibility. Additionally, integration solves this issue.

Misaligned Goals Between Teams

Teams may prioritize speed over stability. Misalignment causes conflict. Additionally, shared KPIs increase unity.

Overemphasis on Output Instead of Outcomes

Output metrics focus on volume. Outcome metrics reflect impact. Additionally, focusing on outcomes improves value.

Solutions for Creating a Healthy Metrics Culture

Healthy metrics cultures rely on transparency, collaboration, and continuous improvement. Teams avoid blame. Additionally, leadership supports learning.

Summary and Key Takeaways

This guide covered essential KPIs for DevOps teams, from DORA metrics to operational, quality, and business indicators. These KPIs reflect speed, stability, reliability, and value delivery. Monitoring them helps teams improve pipelines, reduce failures, and optimize performance. Furthermore, KPIs align engineering with business goals and customer expectations. Teams should balance speed with quality and review KPIs consistently. Using the right metrics empowers DevOps organizations to grow sustainably. For companies scaling engineering teams, the ability to hire an offshore software development team also supports long-term success. Ultimately, KPIs create transparency, accountability, and predictable delivery—a foundation for modern DevOps excellence.

Frequently Asked Questions

What are the top 5 KPIs every DevOps team should start with?

The top 5 KPIs include deployment frequency, lead time for changes, change failure rate, MTTR, and availability. These KPIs reflect overall delivery health. Additionally, they highlight performance across speed, quality, and stability. These metrics provide a strong foundation for evaluation.

How often should DevOps KPIs be reviewed?

Teams should review KPIs weekly, monthly, and quarterly. Frequent reviews maintain alignment. Additionally, long-term analysis improves strategic decisions.

What tools are best for tracking DevOps metrics?

Tools like Grafana, Datadog, Splunk, and Kubernetes-native tools help track DevOps metrics effectively. Additionally, CI/CD dashboards improve visibility.

What’s the difference between a DevOps metric and a KPI?

Metrics measure values. KPIs measure goals. Additionally, KPIs align metrics with strategy.

How do I balance speed and quality when tracking KPIs?

Teams balance speed and quality by monitoring stability metrics along with delivery metrics. Additionally, automation ensures consistency.

What are common mistakes DevOps teams make when tracking KPIs?

Common mistakes include focusing on vanity metrics, ignoring root causes, and tracking too many indicators. Additionally, failing to act on insights limits progress.

Conclusion

Tracking the right KPIs for DevOps helps teams improve performance, stability, and delivery quality. DevOps organizations rely on data to streamline workflows, reduce failures, and accelerate releases. Additionally, DevOps KPIs connect engineering output with business goals. Teams should review metrics regularly and refine them as systems evolve. Furthermore, a strong KPI framework encourages accountability and continuous improvement. Whether teams monitor DORA metrics or business-focused indicators, KPIs create clarity and drive success. With consistent tracking, DevOps teams optimize workflows and build resilient, scalable systems that support long-term growth.
