Chariot's first dynamic routing and Business to Government (B2G) Pilot
Chariot is an international microtransit service owned by Ford. Our team was tasked with launching Chariot's first dynamic routing service and its first public transportation agency partnership. This pilot program fills the first/last-mile gaps in the public transit system using human-centered design research methodologies.
Role: User Researcher
Ford / Chariot's Goal
What does it take to partner with a public transit agency? How does our service change depending on their needs? Is it a fruitful business opportunity?
Let's pilot it to see!
Transit Agency's Goal
A transit station parking lot is over capacity, and the transportation agency needs a way to get residents there without their personal vehicles. Instead of building a new parking garage (which would encourage more driving and therefore congestion), and instead of running low-frequency fixed-route public buses, could we offer a system that is more user-friendly and popular with riders?
Let's pilot it to see!
This was a complex project with multiple clients, vendors, and in-house teams. We had to pull all of these stakeholders together and understand the relationships between them.
First we had to list out the stakeholders and see how they interconnect.
Given the resources we have available to us, what would success look like for this project?
After talking to numerous stakeholders, we found patterns and were able to group them.
We asked ourselves, what data stories do we want to be able to tell at the end of this pilot?
Next we brainstormed how to measure these things. Which metrics are the most actionable?
Measuring happiness and trust
We referred to experts in the field to consider how to measure these aspects that are key to our success with riders.
We curated an analytics dashboard where we could track KPIs as we tested out our service.
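The case study doesn't list the specific happiness and trust metrics on the dashboard. One common way to quantify rider happiness from survey data is Net Promoter Score; a minimal sketch, assuming standard NPS conventions (0–10 "likelihood to recommend" ratings) rather than the project's actual instrument:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' survey ratings.

    Standard NPS convention: promoters score 9-10, detractors 0-6;
    NPS = %promoters - %detractors, reported as a whole number.
    """
    if not ratings:
        raise ValueError("no ratings collected yet")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

A score like this can be recomputed after each batch of post-ride surveys and plotted on the dashboard over the life of the pilot.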
Peer Program Case Studies
We needed to understand the microtransit landscape by studying peer programs. What overarching lessons could we learn from them? Which features were the most beneficial? What were realistic benchmarks for what we could hope to achieve, given their experiences? And how did they collect their data?
Methods to gather case study data:
Attended case study presentations
Collectively building this service Blueprint enabled us to wrap our minds around the multiple components that define this service.
Closing Customer Feedback Loops
One of the biggest deficiencies we found in researching public transit systems is the wide gap between decision makers, service deliverers, and riders. We made it a top priority that our users would feel someone was on the other end taking in their questions and concerns and addressing them. We included Intercom so riders could feel supported at every step of their journey and so we could track top complaints; surveys to gather richer insights; and reviews to assess every ride.
Simulating our new Service
We simulated a bare-bones version of the service to test our assumptions and get feedback from users.
We contacted residents who typically drive to the transit station to try our shuttle service for one day and provide us with feedback.
Bodystorming & Improv
Here our interaction designer and I act out a conversation in which we try to coordinate carpooling. This helped us understand the logic flow; the order and amount of information to provide to the user at each point in the conversation; what data we needed to collect to return an answer; and what the right tone should be.
Building a TextBot
I took the insights from the bodystorming exercise and integrated the language and logic into a textbot using TextIT and Twilio.
This enabled us both to provide the information and support necessary for the service and to collect feedback on the user's experience along the way.
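The production flow ran in TextIT with Twilio as the SMS gateway, so no hand-written code was required. As an illustration of the kind of keyword-driven conversation logic such a flow encodes, here is a minimal sketch; the states, prompts, and function names are hypothetical, not taken from the actual bot:

```python
# Each state maps to (outgoing prompt, {normalized reply -> next state}).
# "*" is a wildcard that accepts any reply (e.g. a free-text address).
FLOW = {
    "start":   ("Hi! Want a ride to the station? Reply YES or NO.",
                {"yes": "pickup", "no": "end"}),
    "pickup":  ("Great. What's your pickup address?", {"*": "confirm"}),
    "confirm": ("Thanks! Your shuttle will text you when it's nearby.", {}),
    "end":     ("No problem. Text us anytime you need a ride.", {}),
}

def step(state, reply):
    """Return (next_state, outgoing_message) for an incoming SMS reply.

    Unrecognized replies with no wildcard keep the user in the same
    state, re-sending that state's prompt.
    """
    _, transitions = FLOW[state]
    nxt = transitions.get(reply.strip().lower(), transitions.get("*", state))
    return nxt, FLOW[nxt][0]
```

Modeling the conversation as explicit states made it easy to revise individual prompts as user testing surfaced confusion.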
Wizard of Oz
Here I am interacting with some of the riders on the back end of the text service.
I provided them with information so they could use the service, which helped us prioritize features. I saw where users got confused, based on the questions they asked, which we took as feedback for our UX. I was able to troubleshoot and provide them with customer support, which shaped the script we deliver to our customer service reps.
Collecting Feedback with Flipbooks
We put some materials in front of users like wireframes for the app and user tested with them throughout the journey.
We incorporated the feedback into the app we were building and the service blueprint.
Building the Tech
Contributed to determining the feature prioritization, helped design the flow, sketched out wireframe concepts, and provided feedback on final digital wireframes. Focused feedback on usability and inclusive design principles.
I advocated for and configured Intercom, our customer support chat system, to ensure we were always able to connect with the user along the way.
Contributed to determining the feature prioritization and provided feedback from a usability and inclusive design perspective.
On-Demand Dynamic Routing Algorithm
Translating how this works in the most relevant way for riders.
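The pilot's actual routing algorithm isn't detailed here (Chariot's implementation is proprietary). A common heuristic behind on-demand dynamic routing, and a useful way to explain it to riders, is greedy insertion: each new pickup request is placed at the point in the shuttle's current route where it adds the least extra distance. A minimal sketch, assuming grid (Manhattan) distances and hypothetical function names:

```python
def route_cost(route):
    """Total Manhattan distance along an ordered list of (x, y) stops."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(route, route[1:]))

def insert_pickup(route, pickup):
    """Insert a new pickup where it adds the least distance.

    `route` must already contain at least two stops (the shuttle's
    current position and its final destination); the new pickup is
    tried at every interior position and the cheapest route wins.
    """
    return min(
        (route[:i] + [pickup] + route[i:] for i in range(1, len(route))),
        key=route_cost,
    )
```

In rider-facing terms: the shuttle takes a small detour to pick you up, chosen so the detour costs everyone on board as little time as possible.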
Crafting a Test Plan
I put together a test plan to test the interoperability of the technical components alongside the added service components and the people involved:
On-Demand Dynamic Routing Algorithm
Intercom Customer Service
Physical waiting areas
Our team of engineers and designers
This photo shows me socializing the test plan. I created a 3D visualization to more easily convey how the test was expected to go, so we could note what went well and what went wrong along the way.
This was also a fun opportunity for our team to come together so we infused play along the way!
Downloading and Documenting Results
We noted app issues by marking them on flipbooks and video-recording buggy interactions.
We tracked technical bugs and plotted them by severity and frequency to gauge how we should prioritize them.
We used the same methodology to plot design issues.
These were then added to the Trello task management board, where the engineers addressed the bugs and the designers took the design issues into account.
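The severity-by-frequency triage described above can be sketched as a simple priority score. The scoring scheme and field names here are hypothetical, standing in for the plotted chart:

```python
def triage(issues):
    """Sort issues so the most severe, most frequent ones come first.

    Each issue is a dict with 'name', 'severity' (1-5, 5 = worst) and
    'frequency' (occurrences observed during testing). Multiplying the
    two is one hypothetical way to turn the 2D severity/frequency plot
    into a single ranked backlog for the Trello board.
    """
    return sorted(issues,
                  key=lambda b: b["severity"] * b["frequency"],
                  reverse=True)
```

Ranking this way pushes a frequent crash above a rare typo, matching how the team read the plotted chart.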
To ensure our tests were meeting the project KPIs, we measured our test results and recorded them on our Analytics Dashboard wall.