Oracle Inc.

At Oracle, I design customer-facing experiences that translate complex energy data into clear, motivating insights for millions of utility customers.

Manager

Karina Van Schaardenburg

Role

Product Designer II

Year

Aug 2022 - Current

PROJECT OVERVIEW

Redesigning the BDR & PTR Communication System Across the Peak Event Lifecycle

Peak events occur during periods of unusually high electricity demand, when utilities ask customers to temporarily reduce energy use to protect grid reliability and reduce the risk of outages.

BDR programs rely on behavioral motivation without monetary incentives, while PTR programs reward customers with bill credits for reducing energy use during peak events. Designing across both required careful framing of feedback, comparisons, and recognition to maintain trust and long-term engagement.

I led the end-to-end design for this work, synthesizing prior research, defining testable hypotheses, driving concept exploration, and partnering with research and data science to validate outcomes.

Rather than documenting specific UX activities, this case study focuses on the decisions and tradeoffs that shaped the final system.

Product Manager

Alex Zhou

Project Role

Lead Product Designer

Timeline

6 weeks

PROBLEM STATEMENT

How might we redesign performance feedback to sustain trust and participation across repeated peak events?

BDR and PTR programs depend on repeated customer participation across many peak events over time. However, the existing main insight, which summarized each customer's performance, relied heavily on rank-based comparisons that many customers misunderstood, perceived as unfair, or found demotivating.

When customers could not clearly understand why they performed a certain way, or whether their effort meaningfully mattered, trust eroded and motivation to participate in future events declined. This risked undermining both behavioral (BDR) and incentive-based (PTR) programs, despite their different reward models.

The challenge was to redesign performance feedback to feel fair, transparent, and motivating while operating within real-world data constraints and scaling across millions of customers.

The BDR & PTR emails emphasized rank-based outcomes without sufficient context, explanation, or acknowledgment of effort. While these comparisons were technically accurate, they often failed to reflect real-world factors influencing energy use and left many customers uncertain about what their performance actually meant.

In parallel, the email experience had not evolved alongside newer design systems or behavioral science guidance, making it harder to clearly communicate intent, reinforce trust, or motivate continued participation at scale.

RESEARCH SYNTHESIS

The insights that unlocked the redesign

Across both BDR & PTR emails, customers struggled not with performance itself, but with understanding what their results meant and whether their effort was recognized. This insight builds on foundational UX research conducted by Kate Roberts, which I synthesized and applied to the redesign.

69%

of customers misinterpreted rank-based comparisons as arbitrary

58%

of customers lacked motivation without understanding their impact

43%

of customers wanted stronger recognition for reducing use

These findings pointed to a need to move beyond ranking outcomes and toward feedback that clearly explained effort, context, and impact.

EXPLORATIONS

Exploring how different performance frames affect motivation and clarity

Guided by prior research, we explored a wide range of ways to frame the post-event “main insight,” iterating through multiple rounds of concepting across different performance lenses.

From these explorations, we narrowed to three distinct framing strategies that best represented the tradeoffs we wanted to test. Each variation embodied a clear hypothesis about how customers interpret performance, effort, and comparison — and was selected for quantitative validation.

Variation A — 3-Bar Comparison

HYPOTHESIS

Showing energy use alongside efficient benchmarks would encourage improvement without relying on rank. This approach was inspired by the 3-Bar Comparison, a proven main insight already used in our most successful product line, the Home Energy Report (HER).

Variation B — Relative Rank

HYPOTHESIS

An improved rank-based comparison could preserve competitive motivation while reducing confusion through clearer context and localized benchmarks.

Variation C — Population Context Grid

HYPOTHESIS

Visualizing performance within a broader distribution would help customers understand relative impact without invoking shame or loss framing.

Together, these explorations revealed clear tradeoffs between competitive motivation, interpretability, and perceived fairness.

Some approaches improved clarity at the cost of motivation; others preserved motivation but continued to confuse users. These tradeoffs directly informed which direction we validated quantitatively next.

VALIDATION

Confirming a direction through quantitative validation

To avoid optimizing for novelty, we validated the leading concepts against the existing rank-based control. Variants were evaluated via controlled A/B tests across live post-event emails, measuring downstream energy savings, perceived clarity, and intent to participate in future events.

While an updated rank-based comparison performed better than the legacy version, it continued to introduce confusion around why performance differed. In contrast, the 3-Bar Comparison consistently improved interpretability without reducing motivation.
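As a rough sketch of the kind of comparison behind this validation, a two-proportion z-test can check whether a variant's rate on a binary outcome (e.g., stated intent to participate again) differs from the control's. The function name and sample counts below are hypothetical, not figures from the actual analysis:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: customers reporting intent to participate again
z, p = two_proportion_ztest(success_a=4_100, n_a=10_000,   # rank-based control
                            success_b=4_550, n_b=10_000)   # 3-Bar variant
print(f"z = {z:.2f}, p = {p:.4f}")
```

At these sample sizes even a few-point lift is detectable, which is why live post-event emails sent to large populations make a practical testbed.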

Decision

  • Adopt the 3-Bar Comparison as the primary BDR/PTR main insight

  • Extend the chosen direction with recognition, impact framing, and seasonal feedback

This decision balanced behavioral effectiveness with clarity, trust, and scalability across millions of customers.

BEHAVIORAL INCENTIVE

Designing behavioral logic across BDR & PTR

Peak-day programs are not just about reporting results; they are behavioral systems.

Across BDR and PTR, I designed two distinct reinforcement architectures based on program economics, client sensitivity, and behavioral science principles. While both products use comparative feedback, their motivational strategies differ intentionally.

BDR required performance-sensitive social reinforcement.
PTR required earnings validation without social pressure.

Quota-driven social comparison with injunctive norms

BDR is a quota-based program. Utilities only receive payment when customers collectively meet performance thresholds. Because savings outcomes directly impact revenue, the feedback system needed to reliably activate behavioral change.

To drive stronger response, I introduced tiered injunctive norms layered onto the 3-Bar Comparison model. Event-level feedback used graded performance signals (e.g., emoji or leaf progression) to communicate relative standing, while season-level reinforcement rewarded consistent top-tier savings with a performance badge.
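The tiering logic described above can be sketched in a few lines. The thresholds, tier names, and badge rule below are illustrative assumptions, not the production values:

```python
def event_tier(savings_pct, thresholds=(0.05, 0.15)):
    """Map a customer's event savings (as a fraction of baseline use)
    to a graded norm tier. Thresholds here are illustrative only."""
    if savings_pct >= thresholds[1]:
        return "great"       # top tier: strongest positive signal
    if savings_pct >= thresholds[0]:
        return "good"        # middle tier: encouraging signal
    return "keep_going"      # lowest tier: reflective, not punitive

def season_badge(event_tiers, min_top_events=3):
    """Award a season-level badge for consistent top-tier savings."""
    return event_tiers.count("great") >= min_top_events
```

Whether a tier renders as an emoji, a leaf progression, or a geometric badge is then a client-level presentation choice layered on top of the same underlying logic.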

However, social signaling required calibration.

Some utilities reacted negatively to the legacy emoji-based norm, citing customer complaints about perceived judgment. To address this, I:

  • Redesigned the lowest emoji state to feel reflective rather than punitive

  • Introduced neutral alternatives (leaf progression and geometric badges)

  • Allowed client-level flexibility while preserving behavioral intent

This resulted in a scalable norm system that balanced:

  • Behavioral effectiveness

  • Client comfort

  • Emotional tone

  • Program economics

Earnings validation without social judgment

PTR operates differently from BDR.

It is not quota-based, and utilities do not rely on comparative savings performance to trigger payment. In this context, injunctive norms risked introducing unnecessary social pressure.

Instead of competitive comparison, I designed a reinforcement system focused on earnings validation.

At the event level:

  • A dynamic piggy bank state confirmed whether a customer earned a reward

  • No comparative ranking was emphasized

  • Feedback remained informational and encouraging

At the season level:

  • Customers who consistently earned incentives received a performance badge

  • Recognition was tied to participation and savings, not relative superiority

This approach maintained motivation while respecting client concerns around social comparison and fairness.

SOLUTIONS

A cohesive peak-day experience across the season

The validated 3-Bar Comparison became the foundation of a broader peak-day experience designed to guide customers before and after events.

Rather than treating peak days as isolated moments, we focused on reducing friction at each step — balancing timing, clarity, and motivation while maintaining trust across millions of households.

Event reminder email with calendar integration

DESIGN INTENT
  • Peak-day success depends on timing, not just intention

  • Calendar integration reduces reliance on memory and increases follow-through

  • Clear time windows lower cognitive load and prevent last-minute confusion
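As a sketch of the mechanism, a reminder email can attach a minimal iCalendar (.ics) event so the peak window lands directly on the customer's calendar. The helper and identifiers below are hypothetical; only the standard iCalendar fields are assumed:

```python
from datetime import datetime, timezone

def peak_event_ics(start, end, summary="Peak Savings Event"):
    """Build a minimal iCalendar (.ics) body for an email attachment.
    `start` and `end` are timezone-aware UTC datetimes."""
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//peak-events//EN",
        "BEGIN:VEVENT",
        f"UID:peak-{start.strftime(fmt)}@example.com",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```

Because the event carries an explicit start and end, the customer's own calendar handles the reminder timing rather than the email having to be reopened at the right moment.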

Post-event main insight (3-Bar Comparison)

DESIGN INTENT
  • Preserves motivation without introducing shame or rank anxiety

  • Makes performance differences immediately interpretable

  • Reinforces trust by grounding feedback in familiar benchmarks

“Why this matters” module

DESIGN INTENT

To support long-term engagement, we paired performance feedback with lightweight education, in turn helping customers understand why peak-day participation matters beyond a single event.

"Impact" modules with population-adaptive framing

DESIGN INTENT

Impact was framed using metaphors that resonated with different customer populations.

For climate-motivated audiences, savings were translated into environmental impact (e.g., trees grown).
For more pragmatically motivated audiences, impact was expressed through everyday analogies (e.g., smartphones charged).

This allowed the same underlying savings to feel relevant across diverse values without fragmenting the core experience.
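A minimal sketch of this population-adaptive framing follows. The conversion factors are rough public approximations (not the program's actual equivalency data), and the `impact_message` helper is hypothetical:

```python
# Illustrative conversion factors only -- a real program would source these
# from published equivalency data (e.g., EPA greenhouse-gas equivalencies).
KG_CO2_PER_KWH = 0.4          # rough grid-average emissions factor
KG_CO2_PER_TREE_YEAR = 21.0   # approx. CO2 absorbed by one tree in a year
KWH_PER_PHONE_CHARGE = 0.012  # approx. energy for one smartphone charge

def impact_message(kwh_saved, audience):
    """Translate the same savings figure into an audience-appropriate frame."""
    if audience == "climate":
        trees = (kwh_saved * KG_CO2_PER_KWH) / KG_CO2_PER_TREE_YEAR
        return f"like growing {trees:.1f} trees for a year"
    charges = kwh_saved / KWH_PER_PHONE_CHARGE
    return f"enough to charge a smartphone {charges:,.0f} times"
```

The key design property is that both branches consume the same underlying `kwh_saved` value, so the core experience stays consistent while only the metaphor varies.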

OUTCOMES

This work shipped as part of Oracle’s peak management communications and is now used by utility partners reaching millions of households per season.

Within the first few months of rollout:

  • The validated 3-Bar Comparison outperformed the legacy rank-based insight in ~95% of peak events, measured by energy savings outcomes

  • Clients adopted the new insight as the default across both BDR and PTR programs

  • The design became a foundation for expanding the peak-day experience across before- and after-event touchpoints

Beyond performance lift, this work helped shift internal decision-making toward clear hypotheses, measurable outcomes, and behavioral evidence, even in cases where historical data was incomplete or noisy.

REFLECTIONS

This project reshaped how I think about performance feedback at scale.

I learned that clarity is not neutral. The way performance is framed can either motivate participation or quietly erode trust. Even when rank-based comparisons improved performance metrics, qualitative feedback revealed lingering doubt around why results differed, particularly in cases where customers suspected vacation homes or atypical usage patterns.

Designing for millions of households required balancing:

  • Behavioral effectiveness

  • Perceived fairness

  • Interpretability under real-world skepticism

Leading this work also pushed me into a more cross-functional role: initiating quantitative testing, aligning with analytics and delivery teams, and translating behavioral science into patterns engineers could reliably implement.

Most importantly, I learned how to advocate for design decisions using evidence rather than taste, and how to move complex systems forward without sacrificing customer trust.

Let's talk

Email:

remmysharma1107@gmail.com

