Human OSINT for API Exploitation

My favorite exploitation pathways are the ones where you get to predict the psychology of a developer, or a team of developers, in the process of building, pushing, or forking a chunk of code. Abusing APIs builds on the same mentality as performing SQL injection (SQLi); however, with most organizations now deploying WAFs and hardened DBMS configurations, SQLi is rarely found without resorting to bypass techniques. API exploitation relies heavily on the bet that the development team took a misstep or overlooked a small detail.


I will write this post from the perspective of performing a bug bounty, red team operation, or external penetration test, with minimal initial knowledge of the inner workings of the targeted company. If you are choosing a target from a list of organizations on a bug bounty platform, starting with basic OSINT on team structures will point you towards a company with a noticeably easier-to-penetrate attack surface.



Mapping Team Structure with OSINT

LinkedIn Reconnaissance with Google Dorking

site:linkedin.com/in ("frontend engineer" OR "react developer") "COMPANY NAME"

site:linkedin.com/in ("backend engineer" OR "api developer" OR "flask") "COMPANY NAME"


Further filter by location, job title, and department.
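
If you are profiling several companies or role groupings at once, it can be worth scripting the query generation instead of typing each dork by hand. Below is a minimal Python sketch; the role keywords are my own assumptions, and the generated queries are meant to be pasted into the search engine manually rather than automated against it:

# Sketch: generate the LinkedIn dork queries above for a list of role groupings.
# The role keywords here are illustrative assumptions, not a fixed taxonomy.
ROLE_GROUPS = {
    "frontend": ['"frontend engineer"', '"react developer"', '"ui engineer"'],
    "backend": ['"backend engineer"', '"api developer"', '"flask"', '"django"'],
    "security": ['"application security"', '"security engineer"', '"soc analyst"'],
}

def build_dorks(company):
    # One dork per role grouping, following the site:linkedin.com/in pattern above.
    return [
        f'site:linkedin.com/in ({" OR ".join(roles)}) "{company}"'
        for roles in ROLE_GROUPS.values()
    ]

for query in build_dorks("COMPANY NAME"):
    print(query)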

What we are looking for here are siloed teams that lack communication, along with weakness in the backend and security departments.

If you see rigid separation, such as distinct titles like “React Developer”, “Backend API Engineer”, and “Platform Engineer”, that hints towards siloed teams.

If you see a large number of frontend engineers compared to the number of backend engineers, that hints towards APIs that are under-maintained.

If you see a lot of short tenures, that hints towards knowledge of the internal workings being gradually lost, leaving insecure legacy endpoints that could be targeted.

If you see a lot of employees coming from startup backgrounds, that hints towards a habit of shipping fast and overlooking small bugs and overexposed APIs, as well as temporary admin bypasses being left in the production environment.

If you see cross-continent employment, such as the majority of the front-end team being based in the USA and the majority of the backend team being based in India, that hints towards a loss of communication.

If you see a lack of a formal Application Security team, that hints towards a lack of threat modeling and basic security testing being overlooked, such as rate limiting and Role-Based Access Control (RBAC) checks.
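
Rate limiting in particular is cheap to verify once you have an in-scope endpoint. Below is a minimal Python sketch; the endpoint, credentials, and request volume are placeholders, and the volume should stay within whatever the rules of engagement allow:

# Sketch: probe a hypothetical login endpoint for missing rate limiting.
# TARGET and ATTEMPTS are placeholders; adjust them to the rules of engagement.
import requests

TARGET = "https://api.example.com/v1/login"   # hypothetical endpoint
ATTEMPTS = 50

statuses = []
for i in range(ATTEMPTS):
    r = requests.post(TARGET, json={"email": "test@example.com", "password": f"wrong{i}"})
    statuses.append(r.status_code)

# A complete absence of 429s (or lockout responses) suggests rate limiting is missing.
if 429 in statuses:
    print("Rate limiting kicked in at attempt", statuses.index(429) + 1)
else:
    print("No 429s observed across", ATTEMPTS, "attempts - rate limiting may be absent.")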

Parse Job Descriptions on Careers Page

Again, we are looking for siloed teams and a lack of communication.

If you see a lack of emphasis on the need for the front-end and back-end teams to communicate, that hints towards a lack of full-stack overlap.

If you see an emphasis on moving fast and prototyping rapidly, that hints towards minimal validation and logic flaws.

If you see an emphasis on developers owning their features end-to-end, that hints towards developers deploying their own code with minimal Quality Assurance checks.

If you see an emphasis on Product Managers, that hints towards feature delivery being prioritized over security and architecture.

Mapping Security Infrastructure from OSINT

Here, we are predicting what blue-team security infrastructure an organization may have in use using OSINT. This will determine how layered an attack path may need to be, as well as how quiet you would have to be in the environment.

If you are able to figure out what the organization uses to log suspicious behavior, you can predict what gets logged, the speed at which internal teams are alerted, and the need to craft payloads that bypass default detection behaviors.

Parse Security Professional Careers Page and LinkedIn Profiles

Look through job descriptions for blue-team-based roles, such as Security Engineer, SOC Analyst, Incident Responder, and SIEM Engineer. Also, look at employees' LinkedIn pages using the same Google Dorks above to see what they list their experience as.
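
For example, reusing the same dork pattern from earlier:

site:linkedin.com/in ("security engineer" OR "SOC analyst" OR "incident responder") "COMPANY NAME"

site:linkedin.com/in ("Splunk" OR "SIEM" OR "ELK") "COMPANY NAME"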

If you see Splunk, that hints towards centralized dashboards with structured alerting pipelines and more consistent detection rules.

If you see job ads asking internal engineers to build their own log pipelines, that hints towards detection mechanisms that lack maturity.

If you see the usage of WAFs, that hints towards fast detection and the need to be stealthier in the environment.

If you see the usage of Elasticsearch, Kibana, or a full ELK logging stack, that hints towards logging being self-hosted with weaker alerting on top of it.

If you see the usage of native cloud-based SIEMs, that hints towards a lack of visibility into application-layer attacks.

If you see older SIEMs such as ArcSight or LogRhythm, that hints towards security tooling not being up to date and SOC analysts being over-alerted.

Disgruntled Employee Complaints

Using career websites such as Glassdoor can help you in figuring out what is internally broken with company culture, processes, tooling, and team dynamics. The information you glean from prior employee opinions can both widen and deepen your understanding of the attack surface you are working with.

Go to Glassdoor and filter down reviews to engineering/IT/technical departments.

If you see complaints of no formal onboarding or employees being expected to figure things out on their own, that hints towards a lack of standardization with feature development as well as security processes.

If you see juniors complaining of a lack of mentorship, that hints towards unsafe code being pushed to production.

If you see complaints of everything being rushed, that hints towards bugs and open doors being left in the code.

If you see complaints of security being an afterthought, that hints towards a lack of security checks.

If you see complaints of legacy code, that hints towards vulnerable endpoints, old authentication schemes, and session-based flows being left in production.

If you see complaints of a lack of communication, that hints towards front-end and back-end teams blindly trusting each other.

If you see recent complaints about a migration process, that hints towards a hybrid infrastructure currently being in place, with attack paths opened up by misconfigurations.

If you see complaints of nothing being done until something breaks, that hints towards immaturity in logging and alerting.

If you see complaints of the security team being forced into mainly focusing on compliance versus “having fun”, this hints towards the usage of check-the-box security frameworks such as SOC 2 and PCI-DSS being the extent of the security configurations in place.

If you see complaints of security logs being scattered across a large number of tools, that hints towards a fragmented log pipeline with delays and gaps in detection.

__________

This post does not cover the vast number of other teams that touch API development. However, the mindset can be carried over to all of them to get a good idea of what type of organization you are working with.

For Product Management, you want to get an understanding of their prioritization of delivery versus security hardening, as they own which APIs get built, what features are publicly exposed, and when they go live.

For DevOps/DevSecOps/Platform Engineering, you want to get an understanding of how prevalent they are at the organization, as they own the CI/CD pipeline.

For QA and Test Engineering, you want to get an understanding of how resourced they are, as they own the testing process for APIs.

You can also broaden your attack surface research beyond the technical team, as a lot of organizations have internal tools and dashboards with endpoints that are rushed and under-tested. 

Information may be leaked by a marketing or legal employee who is not aware that their endpoints can be accessed externally. Security teams are often completely unaware of what non-technical teams are exposing.

Sales teams may desire easy logins for a clean user experience during demos. This pushes towards more whitelisted IPs and hardcoded bypasses.

Customer Support teams who use elevated APIs to reset passwords, impersonate users, or access customer or internal employee records may have weak role checks.

Marketing and Third-Party Integration teams may leak credentials and API keys for testing integrations.
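
A weak role check on one of these endpoints usually surfaces as broken object level authorization: a token belonging to an ordinary customer can read or act on records it should never see. Below is a minimal Python sketch with entirely hypothetical paths and a placeholder token:

# Sketch: test whether a low-privilege token can reach hypothetical support/integration routes.
# The host, paths, and token are placeholders for illustration only.
import requests

BASE = "https://api.example.com"               # hypothetical API host
LOW_PRIV_TOKEN = "eyJ..."                       # token for an ordinary customer account

headers = {"Authorization": f"Bearer {LOW_PRIV_TOKEN}"}

# The kinds of routes support and integration teams tend to wire up in a hurry.
candidates = [
    "/internal/support/users/1001",             # another user's record
    "/internal/support/users/1001/reset-password",
    "/integrations/keys",                       # third-party integration keys
]

for path in candidates:
    r = requests.get(BASE + path, headers=headers)
    # The 200/401/403/404 pattern tells you whether the route exists and whether roles are enforced.
    print(r.status_code, path)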

Executives are a fun area to tap into because their permissions exceed those of every other employee at the company. You may find that they have access to sensitive dashboards, elevated permissions, fewer restrictions, custom views, and employees delegated to perform actions on their behalf. Due to their vast number of responsibilities, their need for things to “just work” is higher. Due to their seniority, their requests often get approved without pushback.

You may find that executive logins bypass MFA restrictions on every login for certain accounts or login portals. There could be hidden login portals for investors, the board, and shareholders. They may have mobile dashboards to access on the go that don’t follow the same API security protocols as the widely used desktop APIs. They could have dashboards that aggregate admin access across several domains, without granular role-based access controls in place. They could have assistants who access certain elements on their behalf, leading to shared credentials and token reuse. They could have long sessions on their portals because they don’t want to be logged out every day, leading to tokens that never expire and persistent cookies.
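
Long sessions are easy to quantify once you have a token in hand: decode the JWT payload (no signature verification is needed just to read it) and check the exp claim. Below is a minimal Python sketch, assuming the portal issues standard JWTs; the token here is a stand-in built for illustration:

# Sketch: inspect a JWT's expiry without verifying its signature.
# A stand-in token with a 90-day lifetime is built below; swap in one captured from the portal.
import base64, json, time

def b64url(data):
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()

token = b64url({"alg": "none"}) + "." + b64url({"sub": "exec", "exp": int(time.time()) + 90 * 86400}) + "."

payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)          # restore stripped base64 padding
payload = json.loads(base64.urlsafe_b64decode(payload_b64))

exp = payload.get("exp")
if exp is None:
    print("No exp claim - the token may never expire.")
else:
    print(f"Token expires in {(exp - time.time()) / 86400:.1f} days.")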

What I write here is more of a mindset to internalize than a checklist to follow. With every step in the process of abusing and exploiting APIs, it is a matter of using the knowledge accumulated about the dynamics of the humans behind the organization to make attack path decisions based on the psychology of who implemented what, who touched what, who is where, and where they may have faltered.