Document version: 1.0.2 (pdf)
Date: June 29, 2011
Project Coordinators:
Document Editor:
The 2011 CWE/SANS Top 25 Most Dangerous Software Errors is a list of the most widespread and critical errors that can lead to serious vulnerabilities in software. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
The list is the result of collaboration between the SANS Institute, MITRE, and many top software security experts in the US and Europe. It leverages experiences in the development of the SANS Top 20 attack vectors (http://www.sans.org/top20/) and MITRE's Common Weakness Enumeration (CWE) (http://cwe.mitre.org/). MITRE maintains the CWE web site, with the support of the US Department of Homeland Security's National Cyber Security Division, presenting detailed descriptions of the top 25 programming errors along with authoritative guidance for mitigating and avoiding them. The CWE site contains data on more than 800 programming errors, design errors, and architecture errors that can lead to exploitable vulnerabilities.
The 2011 Top 25 makes improvements to the 2010 list, but the spirit and goals remain the same. This year's Top 25 entries are prioritized using inputs from over 20 different organizations, who evaluated each weakness based on prevalence, importance, and likelihood of exploit. It uses the Common Weakness Scoring System (CWSS) to score and rank the final results. The Top 25 list covers a small set of the most effective "Monster Mitigations," which help developers to reduce or eliminate entire groups of the Top 25 weaknesses, as well as many of the hundreds of weaknesses that are documented by CWE.
Here is some guidance for different types of users of the Top 25.
Programmers new to security
Read the brief listing, then examine the Monster Mitigations section to see how a small number of changes in your practices can have a big impact on the Top 25. Pick a small number of weaknesses to work with first, and see the Detailed CWE Descriptions for more information on each weakness, including code examples and specific mitigations.
Programmers who are experienced in security
Use the general Top 25 as a checklist of reminders, and note the issues that have only recently become more common. See the On the Cusp page for other weaknesses that did not make the final Top 25; this includes weaknesses that are only starting to grow in prevalence or importance. If you are already familiar with a particular weakness, then consult the Detailed CWE Descriptions and see the "Related CWEs" links for variants that you may not have fully considered.
Build your own Monster Mitigations section so that you have a clear understanding of which of your own mitigation practices are the most effective - and where your gaps may lie.
Consider building a custom "Top n" list that fits your needs and practices. Consult the Common Weakness Risk Analysis Framework (CWRAF) page for a general framework for building top-N lists, and see Appendix C for a description of how it was done for this year's Top 25. Develop your own nominee list of weaknesses, with your own prevalence and importance factors - along with any other factors you wish to include - then build a metric and compare the results with your colleagues, which may produce some fruitful discussions.
Software project managers
Treat the Top 25 as an early step in a larger effort towards achieving software security. Strategic possibilities are covered in efforts such as Building Security In Maturity Model (BSIMM), SAFECode, OpenSAMM, Microsoft SDL, and OWASP ASVS. Examine the Monster Mitigations section to determine which approaches may be most suitable to adopt, or establish your own monster mitigations and map out which of the Top 25 are addressed by them.
Consider building a custom "Top n" list that fits your needs and practices. Consult the Common Weakness Risk Analysis Framework (CWRAF) page for a general framework for building top-N lists, and see Appendix C for a description of how it was done for this year's Top 25. Develop your own nominee list of weaknesses, with your own prevalence and importance factors - along with any other factors you wish to include - then build a metric and compare the results with your colleagues, which may produce some fruitful discussions.
Software Testers
Read the brief listing and consider how you would integrate knowledge of these weaknesses into your tests. If you are in a friendly competition with the developers, you may find some surprises in the On the Cusp entries, or even the rest of CWE.
For each individual CWE entry in the Details section, you can get more information on detection methods from the "technical details" link. Review the CAPEC IDs for ideas on the types of attacks that can be launched against the weakness.
Software customers
Recognize that market pressures often drive vendors to provide software that is rich in features, and security may not be a serious consideration. As a customer, you have the power to influence vendors to provide more secure products by letting them know that security is important to you. Use the Top 25 to help set minimum expectations for due care by software vendors. Consider using the Top 25 as part of contract language during the software acquisition process. The SANS Application Security Procurement Language site offers customer-centric language that is derived from the OWASP Secure Software Contract Annex, which offers a "framework for discussing expectations and negotiating responsibilities" between the customer and the vendor. Other information is available from the DHS Acquisition and Outsourcing Working Group. Consult the Common Weakness Risk Analysis Framework (CWRAF) page for a general framework for building a top-N list that suits your own needs.
For the software products that you use, pay close attention to publicly reported vulnerabilities in those products. See if they reflect any of the associated weaknesses on the Top 25 (or your own custom list), and if so, contact your vendor to determine what processes the vendor is undertaking to minimize the risk that these weaknesses will continue to be introduced into the code.
See the On the Cusp summary for other weaknesses that did not make the final Top 25; this will include weaknesses that are only starting to grow in prevalence or importance, so they may become your problem in the future.
Educators
Start with the brief listing. Some training materials are also available.

Users of the 2010 Top 25
See the What Changed section; while a lot has changed on the surface, this year's effort is better structured.
This is a brief listing of the Top 25 items, using the general ranking.
NOTE: 16 other weaknesses were considered for inclusion in the Top 25, but their general scores were not high enough. They are listed in a separate "On the Cusp" page.
Rank Score ID Name
[1] 93.8 CWE-89 Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
[2] 83.3 CWE-78 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
[3] 79.0 CWE-120 Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
[4] 77.7 CWE-79 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
[5] 76.9 CWE-306 Missing Authentication for Critical Function
[6] 76.8 CWE-862 Missing Authorization
[7] 75.0 CWE-798 Use of Hard-coded Credentials
[8] 75.0 CWE-311 Missing Encryption of Sensitive Data
[9] 74.0 CWE-434 Unrestricted Upload of File with Dangerous Type
[10] 73.8 CWE-807 Reliance on Untrusted Inputs in a Security Decision
[11] 73.1 CWE-250 Execution with Unnecessary Privileges
[12] 70.1 CWE-352 Cross-Site Request Forgery (CSRF)
[13] 69.3 CWE-22 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
[14] 68.5 CWE-494 Download of Code Without Integrity Check
[15] 67.8 CWE-863 Incorrect Authorization
[16] 66.0 CWE-829 Inclusion of Functionality from Untrusted Control Sphere
[17] 65.5 CWE-732 Incorrect Permission Assignment for Critical Resource
[18] 64.6 CWE-676 Use of Potentially Dangerous Function
[19] 64.1 CWE-327 Use of a Broken or Risky Cryptographic Algorithm
[20] 62.4 CWE-131 Incorrect Calculation of Buffer Size
[21] 61.5 CWE-307 Improper Restriction of Excessive Authentication Attempts
[22] 61.1 CWE-601 URL Redirection to Untrusted Site ('Open Redirect')
[23] 61.0 CWE-134 Uncontrolled Format String
[24] 60.3 CWE-190 Integer Overflow or Wraparound
[25] 59.9 CWE-759 Use of a One-Way Hash without a Salt

CWE-89 - SQL injection - delivers the knockout punch of security weaknesses in 2011. For data-rich software applications, SQL injection is the means to steal the keys to the kingdom. CWE-78, OS command injection, arises where the application interacts with the operating system.
The classic buffer overflow (CWE-120) comes in third, still pernicious after all these decades. Cross-site scripting (CWE-79) is the bane of web applications everywhere. Rounding out the top 5 is Missing Authentication (CWE-306) for critical functionality.
This section sorts the entries into the three high-level categories that were used in the 2009 Top 25:
These weaknesses are related to insecure ways in which data is sent and received between separate components, modules, programs, processes, threads, or systems.
For each weakness, its ranking in the general list is provided in square brackets.
Rank CWE ID Name
[1] CWE-89 Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
[2] CWE-78 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
[4] CWE-79 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
[9] CWE-434 Unrestricted Upload of File with Dangerous Type
[12] CWE-352 Cross-Site Request Forgery (CSRF)
[22] CWE-601 URL Redirection to Untrusted Site ('Open Redirect')

The weaknesses in this category are related to ways in which software does not properly manage the creation, usage, transfer, or destruction of important system resources.
Rank CWE ID Name
[3] CWE-120 Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
[13] CWE-22 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
[14] CWE-494 Download of Code Without Integrity Check
[16] CWE-829 Inclusion of Functionality from Untrusted Control Sphere
[18] CWE-676 Use of Potentially Dangerous Function
[20] CWE-131 Incorrect Calculation of Buffer Size
[23] CWE-134 Uncontrolled Format String
[24] CWE-190 Integer Overflow or Wraparound

The weaknesses in this category are related to defensive techniques that are often misused, abused, or just plain ignored.
Rank CWE ID Name
[5] CWE-306 Missing Authentication for Critical Function
[6] CWE-862 Missing Authorization
[7] CWE-798 Use of Hard-coded Credentials
[8] CWE-311 Missing Encryption of Sensitive Data
[10] CWE-807 Reliance on Untrusted Inputs in a Security Decision
[11] CWE-250 Execution with Unnecessary Privileges
[15] CWE-863 Incorrect Authorization
[17] CWE-732 Incorrect Permission Assignment for Critical Resource
[19] CWE-327 Use of a Broken or Risky Cryptographic Algorithm
[21] CWE-307 Improper Restriction of Excessive Authentication Attempts
[25] CWE-759 Use of a One-Way Hash without a Salt
For each individual weakness entry, additional information is provided. The primary audience is intended to be software programmers and designers.
Ranking: The ranking of the weakness in the general list.
Score Summary: A summary of the individual ratings and scores that were given to this weakness, including Prevalence, Importance, and Adjusted Score.
CWE ID and Name: CWE identifier and short name of the weakness.
Supporting Information: Supplementary information about the weakness that may be useful for decision-makers to further prioritize the entries.
Discussion: Short, informal discussion of the nature of the weakness and its consequences. The discussion avoids digging too deeply into technical detail.
Prevention and Mitigations: Steps that developers can take to mitigate or eliminate the weakness. Developers may choose one or more of these mitigations to fit their own needs. Note that the effectiveness of these techniques varies, and multiple techniques may be combined for greater defense-in-depth.
Related CWEs: Other CWE entries that are related to the Top 25 weakness. Note: this list is illustrative, not comprehensive.
General Parent: One or more pointers to more general CWE entries, so you can see the breadth and depth of the problem.
Related Attack Patterns: CAPEC entries for attacks that may be successfully conducted against the weakness. Note: the list is not necessarily complete.
Other Pointers: Links to more details, including source code examples that demonstrate the weakness, methods for detection, etc.

Each Top 25 entry includes supporting data fields for weakness prevalence, technical impact, and other information. Each entry also includes the following data fields.
Attack Frequency: How often the weakness occurs in vulnerabilities that are exploited by an attacker.
Ease of Detection: How easy it is for an attacker to find this weakness.
Remediation Cost: The amount of effort required to fix the weakness.
Attacker Awareness: The likelihood that an attacker is going to be aware of this particular weakness, methods for detection, and methods for exploitation.

See Appendix A for more details.
This section provides details for each individual CWE entry, along with links to additional information. See the Organization of the Top 25 section for an explanation of the various fields.
These days, it seems as if software is all about the data: getting it into the database, pulling it from the database, massaging it into information, and sending it elsewhere for fun and profit. If attackers can influence the SQL that you use to communicate with your database, then suddenly all your fun and profit belongs to them. If you use SQL queries in security controls such as authentication, attackers could alter the logic of those queries to bypass security. They could modify the queries to steal, corrupt, or otherwise change your underlying data. They'll even steal data one byte at a time if they have to, and they have the patience and know-how to do so. In 2011, SQL injection was responsible for the compromises of many high-profile organizations, including Sony Pictures, PBS, MySQL.com, security company HBGary Federal, and many others.
Technical Details | Code Examples | Detection Methods | References
For example, consider using persistence layers such as Hibernate or Enterprise Java Beans, which can provide significant protection against SQL injection if used properly.
Architecture and Design
Process SQL queries using prepared statements, parameterized queries, or stored procedures. These features should accept parameters or variables and support strong typing. Do not dynamically construct and execute query strings within these features using "exec" or similar functionality, since you may re-introduce the possibility of SQL injection.
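This parameterized pattern can be sketched with Python's standard-library sqlite3 module; the table, column names, and inputs below are illustrative assumptions, not taken from the original text.

```python
import sqlite3

# In-memory database with an illustrative schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# Untrusted input containing a classic injection attempt.
user_input = "alice' OR '1'='1"

# Vulnerable pattern (string concatenation) would let the attacker rewrite the query:
#   conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")
# Parameterized pattern: the driver binds the value as data, never as SQL syntax.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the injection string is treated as a literal name and matches nothing
```

Because the value is bound as a parameter, the apostrophe and the OR clause never reach the SQL parser as syntax.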
Architecture and Design, Operation
Specifically, follow the principle of least privilege when creating user accounts to a SQL database. The database users should only have the minimum privileges necessary to use their account. If the requirements of the system indicate that a user can read and modify their own data, then limit their privileges so they cannot read/write others' data. Use the strictest permissions possible on all database objects, such as execute-only for stored procedures.
Architecture and Design
Instead of building your own implementation, check whether such features are already available in the database or programming language. For example, the Oracle DBMS_ASSERT package can check or enforce that parameters have certain properties that make them less vulnerable to SQL injection. For MySQL, the mysql_real_escape_string() API function is available in both C and PHP.
Implementation
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
When constructing SQL query strings, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. This will indirectly limit the scope of an attack, but this technique is less important than proper output encoding and escaping.
Note that proper output encoding, escaping, and quoting is the most effective solution for preventing SQL injection, although input validation may provide some defense-in-depth. This is because it effectively limits what will appear in output. Input validation will not always prevent SQL injection, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, the name "O'Reilly" would likely pass the validation step, since it is a common last name in the English language. However, it cannot be directly inserted into the database because it contains the "'" apostrophe character, which would need to be escaped or otherwise handled. In this case, stripping the apostrophe might reduce the risk of SQL injection, but it would produce incorrect behavior because the wrong name would be recorded.
When feasible, it may be safest to disallow meta-characters entirely, instead of escaping them. This will provide some defense in depth. After the data is entered into the database, later processes may neglect to escape meta-characters before use, and you may not have control over those processes.
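As a rough sketch of whitelist validation for the "colors" example above (the allowed set and function name are illustrative assumptions):

```python
# Whitelist validation: accept only values from a known-good set.
ALLOWED_COLORS = {"red", "blue", "green"}

def validate_color(value: str) -> str:
    """Return the value unchanged if it is an expected color; reject otherwise."""
    if value not in ALLOWED_COLORS:
        raise ValueError(f"unexpected color: {value!r}")
    return value

print(validate_color("red"))   # accepted
# validate_color("boat") raises ValueError: alphanumeric, but not a valid color here
```

A strict membership test like this sidesteps escaping entirely for fields whose legal values are fully enumerable.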
Architecture and Design
If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.
In the context of SQL Injection, error messages revealing the structure of a SQL query can help attackers tailor successful attack strings.
Operation
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
Operation, Implementation
Your software is often the bridge between an outsider on the network and the internals of your operating system. When you invoke another program on the operating system, but you allow untrusted inputs to be fed into the command string that you generate for executing that program, then you are inviting attackers to cross that bridge into a land of riches by executing their own commands instead of yours.
Technical Details | Code Examples | Detection Methods | References
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
Architecture and Design
For example, consider using the ESAPI Encoding control or a similar tool, library, or framework. These will help the programmer encode outputs in a manner less prone to error.
Implementation
Some languages offer multiple functions that can be used to invoke commands. Where possible, identify any function that invokes a command shell using a single string, and replace it with a function that requires individual arguments. These functions typically perform appropriate quoting and filtering of arguments. For example, in C, the system() function accepts a string that contains the entire command to be executed, whereas execl(), execve(), and others require an array of strings, one for each argument. In Windows, CreateProcess() only accepts one command at a time. In Perl, if system() is provided with an array of arguments, then it will quote each of the arguments.
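The same single-string versus argument-array distinction can be sketched in Python with the stdlib subprocess module (the command and the input value are illustrative assumptions):

```python
import subprocess
import sys

# Attacker-controlled value containing shell metacharacters.
user_arg = "report.txt; echo INJECTED"

# Vulnerable: subprocess.run("somecmd " + user_arg, shell=True) would hand the
# whole string to a shell, which would execute the injected "echo INJECTED".

# Safer: pass arguments as a list; no shell parses the string, so the
# metacharacters reach the child process as plain data in a single argument.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", user_arg],
    capture_output=True, text=True,
)
print(result.stdout)  # report.txt; echo INJECTED
```

The child simply echoes its first argument back, showing that the semicolon was never interpreted as a command separator.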
Implementation
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
When constructing OS command strings, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. This will indirectly limit the scope of an attack, but this technique is less important than proper output encoding and escaping.
Note that proper output encoding, escaping, and quoting is the most effective solution for preventing OS command injection, although input validation may provide some defense-in-depth. This is because it effectively limits what will appear in output. Input validation will not always prevent OS command injection, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, when invoking a mail program, you might need to allow the subject field to contain otherwise-dangerous inputs like ";" and ">" characters, which would need to be escaped or otherwise handled. In this case, stripping the character might reduce the risk of OS command injection, but it would produce incorrect behavior because the subject field would not be recorded as the user intended. This might seem to be a minor inconvenience, but it could be more important when the program relies on well-structured subject lines in order to pass messages to other components.
Even if you make a mistake in your validation (such as forgetting one out of 100 input fields), appropriate encoding is still likely to protect you from injection-based attacks. As long as it is not done in isolation, input validation is still a useful technique, since it may significantly reduce your attack surface, allow you to detect some attacks, and provide other security benefits that proper encoding does not address.
Architecture and Design
If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.
In the context of OS Command Injection, error information passed back to the user might reveal whether an OS command is being executed and possibly which command is being used.
Operation
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
Architecture and Design, Operation
Buffer overflows are Mother Nature's little reminder of that law of physics that says: if you try to put more stuff into a container than it can hold, you're going to make a mess. The scourge of C applications for decades, buffer overflows have been remarkably resistant to elimination. However, copying an untrusted input without checking the size of that input is the simplest error to make in a time when there are much more interesting mistakes to avoid. That's why this type of buffer overflow is often referred to as "classic." It's decades old, and it's typically one of the first things you learn about in Secure Programming 101.
Technical Details | Code Examples | Detection Methods | References
For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer.
Be wary that a language's interface to native code may still be subject to overflows, even if the language itself is theoretically safe.
Architecture and Design
Examples include the Safe C String Library (SafeStr) by Messier and Viega, and the Strsafe.h library from Microsoft. These libraries provide safer versions of overflow-prone string-handling functions.
Notes: This is not a complete solution, since many buffer overflows are not related to strings.
Build and Compilation
For example, certain compilers and extensions provide automatic buffer overflow detection mechanisms that are built into the compiled code. Examples include the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice.
Effectiveness: Defense in Depth
Notes: This is not necessarily a complete solution, since these mechanisms can only detect certain types of overflows. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.
Implementation
Double check that your buffer is as large as you specify.
When using functions that accept a number of bytes to copy, such as strncpy(), be aware that the function will not NULL-terminate the destination if the source string is at least as long as the specified copy count.
Check buffer boundaries if accessing the buffer in a loop and make sure you are not in danger of writing past the allocated space.
If necessary, truncate all input strings to a reasonable length before passing them to the copy and concatenation functions.
Implementation
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
Architecture and Design
Effectiveness: Defense in Depth
Notes: This is not a complete solution. However, it forces the attacker to guess an unknown value that changes every program execution. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.
Operation
Effectiveness: Defense in Depth
Notes: This is not a complete solution, since buffer overflows could be used to overwrite nearby variables to modify the software's state in dangerous ways. In addition, it cannot be used in cases in which self-modifying code is required. Finally, an attack could still cause a denial of service, since the typical response is to exit the application.
Build and Compilation, Operation
Effectiveness: Moderate
Notes: This approach is still susceptible to calculation errors, including issues such as off-by-one errors (CWE-193) and incorrectly calculating buffer lengths (CWE-131).
Architecture and Design
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
CAPEC-IDs:
8, 9, 10, 14, 24, 42, 44, 45, 46, 47, 67, 92, 100
Cross-site scripting (XSS) is one of the most prevalent, obstinate, and dangerous vulnerabilities in web applications. It's pretty much inevitable when you combine the stateless nature of HTTP, the mixture of data and script in HTML, lots of data passing between web sites, diverse encoding schemes, and feature-rich web browsers. If you're not careful, attackers can inject JavaScript or other browser-executable content into a web page that your application generates. Your web page is then accessed by other users, whose browsers execute that malicious script as if it came from you (because, after all, it *did* come from you). Suddenly, your web site is serving code that you didn't write. The attacker can use a variety of techniques to get the input directly into your server, or use an unwitting victim as the middle man in a technical version of the "why do you keep hitting yourself?" game.
Technical Details | Code Examples | Detection Methods | References
Examples of libraries and frameworks that make it easier to generate properly encoded output include Microsoft's Anti-XSS library, the OWASP ESAPI Encoding module, and Apache Wicket.
Implementation, Architecture and Design
For any data that will be output to another web page, especially any data that was received from external inputs, use the appropriate encoding on all non-alphanumeric characters.
Parts of the same output document may require different encodings, which will vary depending on whether the output is in the:
HTML body
Element attributes (such as src="XYZ")
URIs
JavaScript sections
Cascading Style Sheets and style property
etc.

Note that HTML Entity Encoding is only appropriate for the HTML body.
Consult the XSS Prevention Cheat Sheet [REF-16] for more details on the types of encoding and escaping that are needed.
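For the HTML-body context specifically, a minimal sketch using Python's stdlib html module (the input string is an illustrative assumption):

```python
import html

# Untrusted input that would execute as script if emitted verbatim into the HTML body.
untrusted = '<script>alert("xss")</script>'

# HTML entity encoding; quote=True also encodes quote characters, which matters
# when the value is placed inside an attribute.
safe = html.escape(untrusted, quote=True)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

As noted above, entity encoding of this kind is only appropriate for the HTML body; attribute, URI, JavaScript, and CSS contexts each need their own encoder.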
Architecture and Design, Implementation
Effectiveness: Limited
Notes: This technique has limited effectiveness, but can be helpful when it is possible to store client state and sensitive information on the server side instead of in cookies, headers, hidden form fields, etc.
Architecture and Design
Effectiveness: Defense in Depth
Implementation
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
When dynamically constructing web pages, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. All input should be validated and cleansed, not just parameters that the user is supposed to specify, but all data in the request, including hidden fields, cookies, headers, the URL itself, and so forth. A common mistake that leads to continuing XSS vulnerabilities is to validate only fields that are expected to be redisplayed by the site. It is common to see data from the request that is reflected by the application server or the application that the development team did not anticipate. Also, a field that is not currently reflected may be used by a future developer. Therefore, validating ALL parts of the HTTP request is recommended.
Note that proper output encoding, escaping, and quoting is the most effective solution for preventing XSS, although input validation may provide some defense-in-depth. This is because it effectively limits what will appear in output. Input validation will not always prevent XSS, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, in a chat application, the heart emoticon ("<3") would likely pass the validation step, since it is commonly used. However, it cannot be directly inserted into the web page because it contains the "<" character, which would need to be escaped or otherwise handled. In this case, stripping the "<" might reduce the risk of XSS, but it would produce incorrect behavior because the emoticon would not be recorded. This might seem to be a minor inconvenience, but it would be more important in a mathematical forum that wants to represent inequalities.
Even if you make a mistake in your validation (such as forgetting one out of 100 input fields), appropriate encoding is still likely to protect you from injection-based attacks. As long as it is not done in isolation, input validation is still a useful technique, since it may significantly reduce your attack surface, allow you to detect some attacks, and provide other security benefits that proper encoding does not address.
Ensure that you perform input validation at well-defined interfaces within the application. This will help protect the application even if a component is reused or moved elsewhere.
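The validation advice above can be sketched as follows; the color and username rules are hypothetical examples, not a complete policy:

```python
import re

ALLOWED_COLORS = {"red", "blue", "green"}   # hypothetical business rule

def validate_color(value):
    # "boat" is alphanumeric, so a syntax-only check would accept it;
    # the business rule requires membership in the expected set.
    if value not in ALLOWED_COLORS:
        raise ValueError("unexpected color: %r" % value)
    return value

def validate_username(value):
    # Stringent whitelist on character set and length, rather than
    # trying to enumerate dangerous characters (a blacklist).
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", value):
        raise ValueError("invalid username")
    return value
```

Applying checks like these at a well-defined interface means every caller gets the same protection, whether the value arrived in a form field, a cookie, or a header.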
Architecture and Design
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
Operation, Implementation
CAPEC-IDs: [view all]
18, 19, 32, 63, 85, 86, 91, 106, 198, 199, 209, 232, 243, 244, 245, 246, 247
In countless action movies, the villain breaks into a high-security building by crawling through heating ducts or pipes, scaling elevator shafts, or hiding under a moving cart. This works because the pathway into the building doesn't have all those nosy security guards asking for identification. Software may expose certain critical functionality with the assumption that nobody would think of trying to do anything but break in through the front door. But attackers know how to case a joint and figure out alternate ways of getting into a system.
Technical Details | Code Examples | Detection Methods | References
Identify all potential communication channels, or other means of interaction with the software, to ensure that all channels are appropriately protected. Developers sometimes perform authentication at the primary channel, but open up a secondary channel that is assumed to be private. For example, a login mechanism may be listening on one network port, but after successful authentication, it may open up a second port where it waits for the connection, but avoids authentication because it assumes that only the authenticated party will connect to the port.
In general, if the software or protocol allows a single session or user state to persist across multiple connections or channels, authentication and appropriate credential management need to be used throughout.
Architecture and Design
In environments such as the World Wide Web, the line between authentication and authorization is sometimes blurred. If custom authentication routines are required instead of those provided by the server, then these routines must be applied to every single page, since these pages could be requested directly.
Architecture and Design
For example, consider using libraries with authentication capabilities such as OpenSSL or the ESAPI Authenticator.
Suppose you're hosting a house party for a few close friends and their guests. You invite everyone into your living room, but while you're catching up with one of your friends, one of the guests raids your fridge, peeks into your medicine cabinet, and ponders what you've hidden in the nightstand next to your bed. Software faces similar authorization problems that could lead to more dire consequences. If you don't ensure that your software's users are only doing what they're allowed to, then attackers will try to exploit your improper authorization and exercise unauthorized functionality that you only intended for restricted users. In May 2011, Citigroup revealed that it had been compromised by hackers who were able to steal details of hundreds of thousands of bank accounts by changing the account information that was present in fields in the URL; proper authorization checks would have verified that the user had the rights to access the specified account. Earlier, a similar missing-authorization attack was used to steal private information of iPad owners from an AT&T site.
Technical Details | Code Examples | Detection Methods | References
Note that this approach may not protect against horizontal authorization, i.e., it will not protect a user from attacking others with the same role.
Architecture and Design
For example, consider using authorization frameworks such as the JAAS Authorization Framework and the OWASP ESAPI Access Control feature.
Architecture and Design
One way to do this is to ensure that all pages containing sensitive information are not cached, and that all such pages restrict access to requests that are accompanied by an active and authenticated session token associated with a user who has the required permissions to access that page.
System Configuration, Installation
Hard-coding a secret password or cryptographic key into your program is bad manners, even though it makes it extremely convenient - for skilled reverse engineers. While it might shrink your testing and support budgets, it can reduce the security of your customers to dust. If the password is the same across all your software, then every customer becomes vulnerable if (rather, when) your password becomes known. Because it's hard-coded, it's usually a huge pain for sysadmins to fix. And you know how much they love inconvenience at 2 AM when their network's being hacked - about as much as you'll love responding to hordes of angry customers and reams of bad press if your little secret should get out. Most of the CWE Top 25 can be explained away as an honest mistake; for this issue, though, many customers won't see it that way. The high-profile Stuxnet worm, which caused operational problems in an Iranian nuclear site, used hard-coded credentials in order to spread. Another way that hard-coded credentials arise is through unencrypted or obfuscated storage in a configuration file, registry key, or other location that is only intended to be accessible to an administrator. While this is much more polite than burying it in a binary program where it can't be modified, it becomes a Bad Idea to expose this file to outsiders through lax permissions or other means.
Technical Details | Code Examples | Detection Methods | References
In Windows environments, the Encrypting File System (EFS) may provide some protection.
Architecture and Design
Use randomly assigned salts for each separate hash that you generate. This increases the amount of computation that an attacker needs to conduct a brute-force attack, possibly limiting the effectiveness of the rainbow table method.
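A minimal sketch of per-hash random salting using Python's standard library; the iteration count and salt size here are illustrative choices:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative work factor

def hash_password(password):
    # A fresh random salt per password means a precomputed rainbow
    # table is useless: every stored hash must be attacked separately.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Note that two hashes of the same password will differ, since each gets its own salt; the salt is stored alongside the digest, not kept secret.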
Architecture and Design
The first suggestion involves the use of generated passwords or keys that are changed automatically and must be entered at given time intervals by a system administrator. These passwords will be held in memory and only be valid for the time intervals.
Next, the passwords or keys should be limited at the back end to only performing actions valid for the front end, as opposed to having full access.
Finally, the messages sent should be tagged and checksummed with time sensitive values so as to prevent replay-style attacks.
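The time-sensitive tagging idea can be sketched as follows, assuming both ends share a secret key; the key, separator, and 30-second window are illustrative choices:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"example-shared-key"   # hypothetical; both ends hold this
MAX_AGE_SECONDS = 30                 # illustrative replay window

def tag_message(payload, now=None):
    # Bind a timestamp to the payload and MAC both together, so a
    # captured message cannot be replayed once the window closes.
    ts = str(int(now if now is not None else time.time())).encode()
    mac = hmac.new(SHARED_KEY, ts + b"." + payload, hashlib.sha256).hexdigest().encode()
    return ts + b"." + payload + b"." + mac

def check_message(message, now=None):
    ts, rest = message.split(b".", 1)
    payload, mac = rest.rsplit(b".", 1)
    expected = hmac.new(SHARED_KEY, ts + b"." + payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad MAC: message was tampered with")
    age = (now if now is not None else time.time()) - int(ts)
    if age > MAX_AGE_SECONDS:
        raise ValueError("stale message: possible replay")
    return payload
```

Because the timestamp is covered by the MAC, an attacker can neither forward-date a captured message nor alter its payload without detection.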
Whenever sensitive data is being stored or transmitted anywhere outside of your control, attackers may be looking for ways to get to it. Thieves could be anywhere - sniffing your packets, reading your databases, and sifting through your file systems. If your software sends sensitive information across a network, such as private data or authentication credentials, that information crosses many different nodes in transit to its final destination. Attackers can sniff this data right off the wire, and it doesn't require a lot of effort. All they need to do is control one node along the path to the final destination, control any node within the same networks of those transit nodes, or plug into an available interface. If your software stores sensitive information on a local file or database, there may be other ways for attackers to get at the file. They may benefit from lax permissions, exploitation of another vulnerability, or physical theft of the disk. You know those massive credit card thefts you keep hearing about? Many of them are due to unencrypted storage. In 2011, many breaches of customer emails and passwords made the attacker's job easier by storing critical information without any encryption. Once the attacker got access to the database, it was game over. In June 2011, the LulzSec group grabbed headlines by grabbing and publishing unencrypted data.
Technical Details | Code Examples | Detection Methods | References
Identify the separate needs and contexts for encryption:
Encryption that is needed to store or transmit private data of the users of the system
Encryption that is needed to protect the system itself from unauthorized disclosure or tampering
One-way (i.e., only the user or recipient needs to have the key). This can be achieved using public key cryptography, or other techniques in which the encrypting party (i.e., the software) does not need to have access to a private key.
Two-way (i.e., the encryption can be automatically performed on behalf of a user, but the key must be available so that the plaintext can be automatically recoverable by that user). This requires storage of the private key in a format that is recoverable only by the user (or perhaps by the operating system) in a way that cannot be recovered by others.
Architecture and Design
For example, US government systems require FIPS 140-2 certification.
Do not develop your own cryptographic algorithms. They will likely be exposed to attacks that are well-understood by cryptographers. Reverse engineering techniques are mature. If your algorithm can be compromised if attackers find out how it works, then it is especially weak.
Periodically ensure that you aren't using obsolete cryptography. Some older algorithms, once thought to require a billion years of computing time, can now be broken in days or hours. This includes MD4, MD5, SHA1, DES, and other algorithms that were once regarded as strong.
Architecture and Design
Effectiveness: Defense in Depth
Notes: This makes it easier to spot places in the code where data is being used that is unencrypted.
CAPEC-IDs: [view all]
31, 37, 65, 117, 155, 157, 167, 204, 205, 258, 259, 260, 383, 384, 385, 386, 387, 388, 389
You may think you're allowing uploads of innocent images (rather, images that won't damage your system - the Interweb's not so innocent in some places). But the name of the uploaded file could contain a dangerous extension such as .php instead of .gif, or other information (such as content type) may cause your server to treat the image like a big honkin' program. So, instead of seeing the latest paparazzi shot of your favorite Hollywood celebrity in a compromising position, you'll be the one whose server gets compromised.
Technical Details | Code Examples | Detection Methods | References
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
For example, limiting filenames to alphanumeric characters can help to restrict the introduction of unintended file extensions.
Architecture and Design
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
None.
CAPEC-IDs: [view all]
1, 122
In countries where there is a minimum age for purchasing alcohol, the bartender is typically expected to verify the purchaser's age by checking a driver's license or other legally acceptable proof of age. But if somebody looks old enough to drink, then the bartender may skip checking the license altogether. This is a good thing for underage customers who happen to look older. Driver's licenses may require close scrutiny to identify fake licenses, or to determine if a person is using someone else's license. Software developers often rely on untrusted inputs in the same way, and when these inputs are used to decide whether to grant access to restricted resources, trouble is just around the corner.
Technical Details | Code Examples | Detection Methods | References
Ensure that the system definitively and unambiguously keeps track of its own state and user state and has rules defined for legitimate state transitions. Do not allow any application user to affect state directly in any way other than through legitimate actions leading to state transitions.
If information must be stored on the client, do not do so without encryption and integrity checking, or otherwise having a mechanism on the server side to catch tampering. Use a message authentication code (MAC) algorithm, such as Hash Message Authentication Code (HMAC). Apply this against the state or sensitive data that you have to expose, which can guarantee the integrity of the data - i.e., that the data has not been modified. Ensure that you use an algorithm with a strong hash function (CWE-328).
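A minimal sketch of HMAC-protecting state that must live on the client; the key name and cookie format are illustrative:

```python
import hashlib
import hmac

SERVER_KEY = b"server-side-secret"   # hypothetical; never sent to the client

def protect_state(state):
    # The client stores the state plus its HMAC; the MAC is hex, so "|"
    # is an unambiguous separator even if the state itself contains "|".
    mac = hmac.new(SERVER_KEY, state, hashlib.sha256).hexdigest().encode()
    return state + b"|" + mac

def read_state(cookie):
    state, mac = cookie.rsplit(b"|", 1)
    expected = hmac.new(SERVER_KEY, state, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("client-side state was tampered with")
    return state
```

Note that an HMAC guarantees integrity, not confidentiality: the client can still read the state, just not modify it undetected. Encrypt separately if the contents are sensitive.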
Architecture and Design
With a stateless protocol such as HTTP, use a framework that maintains the state for you.
Examples include ASP.NET View State and the OWASP ESAPI Session Management feature.
Be careful of language features that provide state support, since these might be provided as a convenience to the programmer and may not have been designed with security in mind.
Architecture and Design
Identify all inputs that are used for security decisions and determine if you can modify the design so that you do not have to rely on submitted inputs at all. For example, you may be able to keep critical information about the user's session on the server side instead of recording it within external data.
None.
CAPEC-IDs: [view all]
232
Spider-Man, the well-known comic superhero, lives by the motto "With great power comes great responsibility." Your software may need special privileges to perform certain operations, but wielding those privileges longer than necessary can be extremely risky. When running with extra privileges, your application has access to resources that the application's user can't directly reach. For example, you might intentionally launch a separate program, and that program allows its user to specify a file to open; this feature is frequently present in help utilities or editors. The user can access unauthorized files through the launched program, thanks to those extra privileges. Command execution can happen in a similar fashion. Even if you don't launch other programs, additional vulnerabilities in your software could have more serious consequences than if it were running at a lower privilege level.
Technical Details | Code Examples | Detection Methods | References
CAPEC-IDs: [view all]
69, 104
You know better than to accept a package from a stranger at the airport. It could contain dangerous contents. Plus, if anything goes wrong, then it's going to look as if you did it, because you're the one with the package when you board the plane. Cross-site request forgery is like that strange package, except the attacker tricks a user into activating a request that goes to your site. Thanks to scripting and the way the web works in general, the user might not even be aware that the request is being sent. But once the request gets to your server, it looks as if it came from the user, not the attacker. This might not seem like a big deal, but the attacker has essentially masqueraded as a legitimate user and gained all the potential access that the user has. This is especially handy when the user has administrator privileges, resulting in a complete compromise of your application's functionality. When combined with XSS, the result can be extensive and devastating. If you've heard about XSS worms that stampede through very large web sites in a matter of minutes (like Facebook), there's usually CSRF feeding them.
Technical Details | Code Examples | Detection Methods | References
For example, use anti-CSRF packages such as the OWASP CSRFGuard.
Another example is the ESAPI Session Management control, which includes a component for CSRF.
Implementation
Notes: Note that this can be bypassed using XSS (CWE-79).
Architecture and Design
Notes: Note that this can be bypassed using XSS (CWE-79).
Architecture and Design
This technique requires JavaScript, so it may not work for browsers that have JavaScript disabled.
Notes: Note that this can probably be bypassed using XSS (CWE-79).
Architecture and Design
Notes: Note that this can be bypassed using XSS (CWE-79). An attacker could use XSS to generate a spoofed Referer, or to generate a malicious request from a page whose Referer would be allowed.
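The token-based defenses above boil down to a per-session secret that a cross-site attacker cannot read. A minimal sketch, where `session` stands in for any server-side session store and the function names are hypothetical:

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Unpredictable per-session token; embed it in every form the
    # server renders (the dict stands in for server-side session storage).
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session, submitted):
    # A forged cross-site request cannot read the victim's page, so it
    # cannot know the token and fails this check.
    expected = session.get("csrf_token")
    return expected is not None and hmac.compare_digest(expected, submitted)
```

Every state-changing request must carry the token; read-only requests can be exempt, which is one reason GET requests should never have side effects.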
CAPEC-IDs: [view all]
62, 111
While data is often exchanged using files, sometimes you don't intend to expose every file on your system while doing so. When you use an outsider's input while constructing a filename, the resulting path could point outside of the intended directory. An attacker could combine multiple ".." or similar sequences to cause the operating system to navigate out of the restricted directory, and into the rest of the system.
Technical Details | Code Examples | Detection Methods | References
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."
Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
When validating filenames, use stringent whitelists that limit the character set to be used. If feasible, only allow a single "." character in the filename to avoid weaknesses such as CWE-23, and exclude directory separators such as "/" to avoid CWE-36. Use a whitelist of allowable file extensions, which will help to avoid CWE-434.
Do not rely exclusively on a filtering mechanism that removes potentially dangerous characters. This is equivalent to a blacklist, which may be incomplete (CWE-184). For example, filtering "/" is insufficient protection if the filesystem also supports the use of "\" as a directory separator. Another possible error could occur when the filtering is applied in a way that still produces dangerous data (CWE-182). For example, if "../" sequences are removed from the ".../...//" string in a sequential fashion, two instances of "../" would be removed from the original string, but the remaining characters would still form the "../" string.
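The sequential-filtering pitfall described above is easy to demonstrate; the fixpoint variant closes that particular hole, but is still weaker than whitelist validation:

```python
def strip_dotdot_once(path):
    # Naive filter: remove every "../" in a single left-to-right pass.
    return path.replace("../", "")

def strip_dotdot_until_fixpoint(path):
    # Repeat until nothing changes; closes the reassembly hole below,
    # but whitelist validation remains the stronger approach.
    while "../" in path:
        path = path.replace("../", "")
    return path
```

A single pass over the string ".../...//" removes two "../" sequences, and the surviving characters reassemble into exactly the "../" that the filter was meant to eliminate.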
Architecture and Design
Use a built-in path canonicalization function that produces the canonical version of the pathname, which effectively removes ".." sequences and symbolic links (CWE-23, CWE-59). This includes:
realpath() in C
getCanonicalPath() in Java
GetFullPath() in ASP.NET
realpath() or abs_path() in Perl
realpath() in PHP
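Canonicalization is only half the job: the canonical result still has to be checked for containment in the intended directory. A sketch in Python, with a hypothetical base directory:

```python
import os

# Hypothetical restricted directory; canonicalized too, so the
# containment check below compares like with like.
BASE_DIR = os.path.realpath("/var/app/uploads")

def resolve_inside_base(user_path):
    # Canonicalize first, then check containment; inspecting the raw
    # string would miss ".." sequences and symbolic links.
    full = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if full != BASE_DIR and not full.startswith(BASE_DIR + os.sep):
        raise ValueError("path escapes the restricted directory")
    return full
```

The prefix check uses the path separator deliberately, so that a sibling directory such as "/var/app/uploads2" does not slip through as a false match.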
Architecture and Design
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
Architecture and Design, Operation
For example, ID 1 could map to "inbox.txt" and ID 2 could map to "profile.txt". Features such as the ESAPI AccessReferenceMap provide this capability.
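The indirect-reference idea can be sketched as a simple server-side map; this is an illustration of the concept, not the ESAPI API:

```python
class AccessReferenceMap:
    # Conceptual sketch: clients only ever see opaque integer IDs; the
    # real filenames stay server-side and cannot be chosen by the user.
    def __init__(self):
        self._files_by_id = {}
        self._next_id = 1

    def add(self, filename):
        ref = self._next_id
        self._next_id += 1
        self._files_by_id[ref] = filename
        return ref

    def resolve(self, ref):
        try:
            return self._files_by_id[ref]
        except KeyError:
            raise ValueError("unknown file reference") from None

refs = AccessReferenceMap()
inbox_id = refs.add("inbox.txt")      # becomes ID 1
profile_id = refs.add("profile.txt")  # becomes ID 2
```

Since the attacker can only submit IDs, there is no filename string left for them to manipulate; an unknown ID is simply rejected.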
Architecture and Design, Operation
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
Architecture and Design, Operation
This significantly reduces the chance of an attacker being able to bypass any protection mechanisms that are in the base program but not in the include files. It will also reduce your attack surface.
Implementation
If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.
In the context of path traversal, error messages which disclose path information can help attackers craft the appropriate attack strings to move through the file system hierarchy.
Operation, Implementation
None.
You don't need to be a guru to realize that if you download code and execute it, you're trusting that the source of that code isn't malicious. Maybe you only access a download site that you trust, but attackers can perform all sorts of tricks to modify that code before it reaches you. They can hack the download site, impersonate it with DNS spoofing or cache poisoning, convince the system to redirect to a different site, or even modify the code in transit as it crosses the network. This scenario even applies to cases in which your own product downloads and installs its own updates. When this happens, your software will wind up running code that it doesn't expect, which is bad for you but great for attackers.
Technical Details | Code Examples | Detection Methods | References
Notes: This is only a partial solution since it will not prevent your code from being modified on the hosting site or in transit.
Architecture and Design, Operation
This will only be a partial solution, since it will not detect DNS spoofing and it will not prevent your code from being modified on the hosting site.
Architecture and Design
Specifically, it may be helpful to use tools or frameworks to perform integrity checking on the transmitted code.
If you are providing the code that is to be downloaded, such as for automatic updates of your software, then use cryptographic signatures for your code and modify your download clients to verify the signatures. Ensure that your implementation does not contain CWE-295, CWE-320, CWE-347, and related weaknesses.
Use code signing technologies such as Authenticode. See references.
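Full code signing requires asymmetric signatures, but the simpler integrity-check half of the idea can be sketched as a comparison against a digest published over a separate trusted channel:

```python
import hashlib

def verify_download(data, published_sha256_hex):
    # The digest must come from a separate trusted channel (for example,
    # an HTTPS page or signed release notes), not alongside the download
    # itself, or an attacker who swaps the file can swap the digest too.
    return hashlib.sha256(data).hexdigest() == published_sha256_hex
```

On a mismatch, the update client should discard the download and refuse to execute or install it; a hash check after installation is too late.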
Architecture and Design, Operation
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
CAPEC-IDs: [view all]
184, 185, 186, 187
While the lack of authorization is more dangerous (see elsewhere in the Top 25), incorrect authorization can be just as problematic. Developers may attempt to control access to certain resources, but implement it in a way that can be bypassed. For example, once a person has logged in to a web application, the developer may store the permissions in a cookie. By modifying the cookie, the attacker can access other resources. Alternately, the developer might perform authorization by delivering code that gets executed in the web client, but an attacker could use a customized client that removes the check entirely.
Technical Details | Code Examples | Detection Methods | References
Note that this approach may not protect against horizontal authorization, i.e., it will not protect a user from attacking others with the same role.
Architecture and Design
For example, consider using authorization frameworks such as the JAAS Authorization Framework and the OWASP ESAPI Access Control feature.
Architecture and Design
One way to do this is to ensure that all pages containing sensitive information are not cached, and that all such pages restrict access to requests that are accompanied by an active and authenticated session token associated with a user who has the required permissions to access that page.
System Configuration, Installation
The idea seems simple enough (not to mention cool enough): you can make a lot of smaller parts of a document (or program), then combine them all together into one big document (or program) by "including" or "requiring" those smaller pieces. This is a common enough way to build programs. Combine this with the common tendency to allow attackers to influence the location of some of these pieces - perhaps even from the attacker's own server - then suddenly you're importing somebody else's code. In these Web 2.0 days, maybe it's just "the way the Web works," but not if security is a consideration.
Technical Details | Code Examples | Detection Methods | References
For example, ID 1 could map to "inbox.txt" and ID 2 could map to "profile.txt". Features such as the ESAPI AccessReferenceMap provide this capability.
Architecture and Design
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
Architecture and Design, Operation
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
For filenames, use stringent whitelists that limit the character set to be used. If feasible, only allow a single "." character in the filename to avoid weaknesses such as CWE-23, and exclude directory separators such as "/" to avoid CWE-36. Use a whitelist of allowable file extensions, which will help to avoid CWE-434.
Architecture and Design, Operation
This significantly reduces the chance of an attacker being able to bypass any protection mechanisms that are in the base program but not in the include files. It will also reduce your attack surface.
Architecture and Design, Implementation
Many file inclusion problems occur because the programmer assumed that certain inputs could not be modified, especially for cookies and URL components.
Operation
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
CAPEC-IDs: [view all]
35, 38, 101, 103, 111, 175, 181, 184, 185, 186, 187, 193, 222, 251, 252, 253
It's rude to take something without asking permission first, but impolite users (i.e., attackers) are willing to spend a little time to see what they can get away with. If you have critical programs, data stores, or configuration files with permissions that make your resources readable or writable by the world - well, that's just what they'll become. While this issue might not be considered during implementation or design, sometimes that's where the solution needs to be applied. Leaving it up to a harried sysadmin to notice and make the appropriate changes is far from optimal, and sometimes impossible.
Technical Details | Code Examples | Detection Methods | References
Effectiveness: Moderate
Notes: This can be an effective strategy. However, in practice it may be difficult or time-consuming to define these areas when there are many different resources or user types, or if the application's features change rapidly.
Architecture and Design, Operation
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Moderate
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
Implementation, Installation
Effectiveness: High
System Configuration
Effectiveness: High
Documentation
Safety is critical when handling power tools. The programmer's toolbox is chock full of power tools, including library or API functions that make assumptions about how they will be used, with no guarantees of safety if they are abused. If potentially-dangerous functions are not used properly, then things can get real messy real quick.
CAPEC-IDs:
If you are handling sensitive data or you need to protect a communication channel, you may be using cryptography to prevent attackers from reading it. You may be tempted to develop your own encryption scheme in the hopes of making it difficult for attackers to crack. This kind of grow-your-own cryptography is a welcome sight to attackers. Cryptography is just plain hard. If brilliant mathematicians and computer scientists worldwide can't get it right (and they're always breaking their own stuff), then neither can you. You might think you created a brand-new algorithm that nobody will figure out, but it's more likely that you're reinventing a wheel that falls off just before the parade is about to start.
For example, US government systems require FIPS 140-2 certification.
Do not develop your own cryptographic algorithms. They will likely be exposed to attacks that are well-understood by cryptographers. Reverse engineering techniques are mature. If your algorithm can be compromised if attackers find out how it works, then it is especially weak.
Periodically ensure that you aren't using obsolete cryptography. Some older algorithms, once thought to require a billion years of computing time, can now be broken in days or hours. This includes MD4, MD5, SHA1, DES, and other algorithms that were once regarded as strong.
Architecture and Design
Industry-standard implementations will save you development time and may be more likely to avoid errors that can occur during implementation of cryptographic algorithms. Consider the ESAPI Encryption feature.
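As one illustration of relying on a vetted, industry-standard implementation rather than a home-grown scheme (this sketch uses Python's standard hashlib; it is an example of the principle, not part of the original guidance):

```python
import hashlib

# Use a current, well-reviewed algorithm (SHA-256) from the platform's
# crypto library, instead of obsolete choices such as MD5 or SHA-1 and
# instead of inventing your own.
data = b"message to protect"
digest = hashlib.sha256(data).hexdigest()
print(len(digest))   # 64 hex characters, i.e. 256 bits
```

The same principle applies to encryption and key exchange: pick a library that has survived public cryptanalysis, and revisit the algorithm choice periodically as older ones weaken.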
Implementation, Architecture and Design
CAPEC-IDs:
20, 97
In languages such as C, where memory management is the programmer's responsibility, there are many opportunities for error. If the programmer does not properly calculate the size of a buffer, then the buffer may be too small to contain the data that the programmer plans to write - even if the input was properly validated. Any number of problems could produce the incorrect calculation, but when all is said and done, you're going to run head-first into the dreaded buffer overflow.
Also be careful to account for 32-bit, 64-bit, and other potential differences that may affect the numeric representation.
Implementation
Effectiveness: Moderate
Notes: This approach is still susceptible to calculation errors, including issues such as off-by-one errors (CWE-193) and incorrectly calculating buffer lengths (CWE-131).
Additionally, this only addresses potential overflow issues. Resource consumption / exhaustion issues are still possible.
Implementation
Use libraries or frameworks that make it easier to handle numbers without unexpected consequences, or buffer allocation routines that automatically track buffer size.
Examples include safe integer handling packages such as SafeInt (C++) or IntegerLib (C or C++).
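The checked size calculation that such packages perform can be sketched as follows. Python integers do not wrap, so a 32-bit size_t is simulated here purely to illustrate the C behavior; the function name and limit are assumptions of the sketch.

```python
# A count * element_size multiplication that would silently wrap around a
# 32-bit size_t in C is detected here and rejected instead.
SIZE_MAX_32 = 2**32 - 1

def checked_alloc_size(count, element_size):
    product = count * element_size      # exact in Python, so we can compare
    if product > SIZE_MAX_32:
        return None                     # would overflow a 32-bit size_t
    return product

print(checked_alloc_size(1024, 4))      # 4096
print(checked_alloc_size(2**30, 8))     # None: 2**33 does not fit in 32 bits
```

In C, the equivalent check must be done before the multiplication (for example, count > SIZE_MAX / element_size), since the overflow itself destroys the evidence.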
Build and Compilation
For example, certain compilers and extensions provide automatic buffer overflow detection mechanisms that are built into the compiled code. Examples include the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice.
Effectiveness: Defense in Depth
Notes: This is not necessarily a complete solution, since these mechanisms can only detect certain types of overflows. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.
Operation
Effectiveness: Defense in Depth
Notes: This is not a complete solution. However, it forces the attacker to guess an unknown value that changes every program execution. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.
Operation
Effectiveness: Defense in Depth
Notes: This is not a complete solution, since buffer overflows could be used to overwrite nearby variables to modify the software's state in dangerous ways. In addition, it cannot be used in cases in which self-modifying code is required. Finally, an attack could still cause a denial of service, since the typical response is to exit the application.
Implementation
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows you to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
Effectiveness: Limited
Notes: The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.
CAPEC-IDs:
47, 100
An often-used phrase is "If at first you don't succeed, try, try again." Attackers may try to break into your account by writing programs that repeatedly guess different passwords. Without some kind of protection against brute force techniques, the attack will eventually succeed. You don't have to be advanced to be persistent.
Disconnecting the user after a small number of failed attempts
Implementing a timeout
Locking out a targeted account
Requiring a computational task on the user's part.
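The account-lockout idea from the list above can be sketched as follows (a hypothetical in-memory example; the thresholds, names, and storage are illustrative only, and a real system would persist this state):

```python
import time

MAX_FAILURES = 3        # small number of failed attempts before lockout
LOCKOUT_SECONDS = 300   # illustrative lockout window

failures = {}           # username -> (failure count, time of last failure)

def record_failure(user, now):
    count, _ = failures.get(user, (0, 0.0))
    failures[user] = (count + 1, now)

def is_locked_out(user, now):
    count, last = failures.get(user, (0, 0.0))
    return count >= MAX_FAILURES and now - last < LOCKOUT_SECONDS

t = time.time()
for _ in range(3):
    record_failure("alice", t)
print(is_locked_out("alice", t))                      # True
print(is_locked_out("alice", t + LOCKOUT_SECONDS))    # False: window expired
```

A time-limited lockout like this slows brute forcing without permanently denying service to the legitimate user, though it can still be abused to lock out a targeted account deliberately.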
Architecture and Design
Consider using libraries with authentication capabilities such as OpenSSL or the ESAPI Authenticator.
While much of the power of the World Wide Web lies in sharing and following links between web sites, there is typically an assumption that a user should be able to click on a link or perform some other action before being sent to a different web site. Many web applications have implemented redirect features that allow attackers to specify an arbitrary URL to link to, and the web client follows it automatically. This may be another of those features that are "just the way the web works," but if left unchecked, it could be useful to attackers in a couple of important ways. First, the victim could be automatically redirected to a malicious site that tries to attack the victim through the web browser. Alternately, a phishing attack could be conducted, which tricks victims into visiting malicious sites that are posing as legitimate sites. Either way, an uncontrolled redirect will send your users someplace that they don't want to go.
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if you are expecting colors such as "red" or "blue."
Use a whitelist of approved URLs or domains to be used for redirection.
Architecture and Design
For example, ID 1 could map to "/login.asp" and ID 2 could map to "http://www.example.com/". Features such as the ESAPI AccessReferenceMap provide this capability.
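The ID-to-URL mapping described above can be sketched like this (the mapping contents come from the example in the text; the function name and fallback are assumptions of the sketch):

```python
# The client supplies only an ID, never a raw URL, so an attacker cannot
# redirect users to an arbitrary destination.
REDIRECT_MAP = {
    "1": "/login.asp",
    "2": "http://www.example.com/",
}

def resolve_redirect(redirect_id):
    # Fall back to a safe local default instead of honoring unknown input.
    return REDIRECT_MAP.get(redirect_id, "/")

print(resolve_redirect("1"))                      # /login.asp
print(resolve_redirect("http://evil.example/"))   # /
```

Because the redirect target is chosen server-side from a fixed table, validation of the user-supplied value reduces to a simple dictionary lookup.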
Architecture and Design, Implementation
Many open redirect problems occur because the programmer assumed that certain inputs could not be modified, such as cookies and hidden form fields.
Operation
Effectiveness: Moderate
Notes: An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.
None.
CAPEC-IDs:
194
The mantra is that successful relationships depend on communicating clearly, and this applies to software, too. Format strings are often used to send or receive well-formed data. By controlling a format string, the attacker can control the input or output in unexpected ways - sometimes, even, to execute code.
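The classic case is C's printf(user_input), but the same mistake exists in other languages. This hedged Python sketch (names and the "secret" are invented for illustration) shows why untrusted text must be treated as data, never as the format string itself:

```python
# If an attacker controls the format string, they control the output.
secrets = {"api_key": "hunter2"}        # illustrative sensitive data
user_input = "%(api_key)s"              # attacker-chosen text

leaked = user_input % secrets           # unsafe: input used as the template
safe = "you said: %s" % user_input      # safe: input used only as data

print(leaked)   # hunter2  (the secret escapes)
print(safe)     # you said: %(api_key)s (the input is rendered inertly)
```

In C, the fix is the same shape: write printf("%s", user_input), never printf(user_input).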
None.
CAPEC-IDs:
67
In the real world, 255+1=256. But to a computer program, sometimes 255+1=0, or 0-1=65535, or maybe 40,000+40,000=14,464. You don't have to be a math whiz to smell something fishy. Actually, this kind of behavior has been going on for decades, and there's a perfectly rational and incredibly boring explanation. Ultimately, it's buried deep in the DNA of computers, which can't count to infinity even if it sometimes feels like they take that long to complete an important task. When programmers forget that computers don't do math like people, bad things ensue - ranging from crashes and faulty price calculations to infinite loops and execution of code.
If possible, choose a language or compiler that performs automatic bounds checking.
Architecture and Design
Use libraries or frameworks that make it easier to handle numbers without unexpected consequences.
Examples include safe integer handling packages such as SafeInt (C++) or IntegerLib (C or C++).
Implementation
Use unsigned integers where possible. This makes it easier to perform sanity checks for integer overflows. If you must use signed integers, make sure that your range check includes minimum values as well as maximum values.
Implementation
Also be careful to account for 32-bit, 64-bit, and other potential differences that may affect the numeric representation.
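The range check described above can be sketched as follows. A 32-bit signed integer is simulated here (Python's own integers are arbitrary-precision, so the bounds are applied explicitly for illustration), and the function name is an assumption of the sketch:

```python
# Check both the minimum and the maximum bound, as recommended above.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_add(a, b):
    result = a + b                      # exact in Python, so we can test it
    if not (INT32_MIN <= result <= INT32_MAX):
        return None                     # overflow: caller must handle it
    return result

print(checked_add(255, 1))                        # 256
print(checked_add(2_000_000_000, 1_000_000_000))  # None: exceeds INT32_MAX
```

In C the comparison must be rearranged so the check itself cannot overflow, for example a > INT_MAX - b before computing a + b.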
Architecture and Design
CAPEC-IDs:
92
Salt might not be good for your diet, but it can be good for your password security. Instead of storing passwords in plain text, a common practice is to apply a one-way hash, which effectively randomizes the output and can make it more difficult if (or when?) attackers gain access to your password database. If you don't add a little salt to your hash, then the health of your application is in danger.
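A minimal sketch of the salted hashing described above, using a per-password random salt and a slow key-derivation function (PBKDF2 from Python's standard library; the iteration count and function name are illustrative assumptions):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A fresh random salt per password means identical passwords produce
    # different hashes, defeating precomputed rainbow-table attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, d1 = hash_password("correct horse")
salt2, d2 = hash_password("correct horse")
print(d1 != d2)   # True: same password, different salts, different hashes

_, d1_again = hash_password("correct horse", salt1)
print(d1 == d1_again)   # True: reproducible when the stored salt is reused
```

Store the salt alongside the hash; it is not a secret, it only forces the attacker to attack each password individually.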
These mitigations will be effective in eliminating or reducing the severity of the Top 25. These mitigations will also address many weaknesses that are not even on the Top 25. If you adopt these mitigations, you are well on your way to making more secure software.
A Monster Mitigation Matrix is also available to show how these mitigations apply to weaknesses in the Top 25.
M1 Establish and maintain control over all of your inputs.
M2 Establish and maintain control over all of your outputs.
M3 Lock down your environment.
M4 Assume that external components can be subverted, and your code can be read by anyone.
M5 Use industry-accepted security features instead of inventing your own.
GP1 (general) Use libraries and frameworks that make it easier to avoid introducing weaknesses.
GP2 (general) Integrate security into the entire software development lifecycle.
GP3 (general) Use a broad mix of methods to comprehensively find and prevent weaknesses.
GP4 (general) Allow locked-down clients to interact with your software.

See the Monster Mitigation Matrix that maps these mitigations to Top 25 weaknesses.
Entries on the 2011 Top 25 were selected using three primary criteria: weakness prevalence, importance, and likelihood of exploit.
Prevalence is effectively an average of values that were provided by voting contributors to the 2010 Top 25 list. This reflects the voter's assessment of how often the issue is encountered in their environment. For example, software vendors evaluated prevalence relative to their own software; consultants evaluated prevalence based on their experience in evaluating other people's software.
Acceptable ratings were:
Widespread: This weakness is encountered more frequently than almost all other weaknesses. (Note: for selection on the general list, the "Widespread" rating could not be used more than 4 times.)
High: This weakness is encountered very often, but it is not widespread.
Common: This weakness is encountered periodically.
Limited: This weakness is encountered rarely, or never.

Importance is effectively an average of values that were provided by voting contributors to the 2011 Top 25 list. This reflects the voter's assessment of how important the issue is in their environment.
Ratings for Importance were:
Critical: This weakness is more important than any other weakness, or it is one of the most important. It should be addressed as quickly as possible, and might require dedicating resources that would normally be assigned to other tasks. (Example: a buffer overflow might receive a Critical rating in unmanaged code because of the possibility of code execution.) Note: for selection on the general list, the "Critical" rating could not be used more than 4 times.
High: This weakness should be addressed as quickly as possible, but it is less important than the most critical weaknesses. (Example: in some threat models, an error message information leak may be given high importance because it can simplify many other attacks.)
Medium: This weakness should be addressed, but only after High and Critical level weaknesses have been addressed.
Low: It is not urgent to address the weakness, or it is not important at all.

Each listed CWE entry also includes several additional fields, whose values are defined below.
When this weakness occurs in software to form a vulnerability, what are the typical consequences of exploiting it?
Code execution: an attacker can execute code or commands
Data loss: an attacker can steal, modify, or corrupt sensitive data
Denial of service: an attacker can cause the software to fail or slow down, preventing legitimate users from being able to use it
Security bypass: an attacker can bypass a security protection mechanism; the consequences vary depending on what the mechanism is intended to protect

How often does this weakness occur in vulnerabilities that are targeted by a skilled, determined attacker?
Consider an "exposed host" which is either: an Internet-facing server, an Internet-using client, a multi-user system with untrusted users, or a multi-tiered system that crosses organizational or trust boundaries. Also consider that a skilled, determined attacker can combine attacks on multiple systems in order to reach a target host.
Often: an exposed host is likely to see this attack on a daily basis.
Sometimes: an exposed host is likely to see this attack more than once a month.
Rarely: an exposed host is likely to see this attack less often than once a month.

How easy is it for the skilled, determined attacker to find this weakness, whether using black-box or white-box methods, manual or automated?
Easy: automated tools or techniques exist for detecting this weakness, or it can be found quickly using simple manipulations (such as typing "