In the broadest sense, a software vulnerability is a flaw that allows the vulnerable system to perform unplanned actions. Examples of the results of these unplanned actions include sensitive information disclosure (example), denial of service (DoS) (example), authentication bypass (example), and, most dangerously, full takeover of a system by a malicious attacker, also known as remote code execution (RCE) (example).
According to ENISA (the European Union Agency for Network and Information Security), a vulnerability is "the existence of a weakness, design, or implementation error that can lead to an unexpected, undesirable event" (reference).
According to NICCS (the National Initiative for Cybersecurity Careers & Studies), a vulnerability is a "characteristic of location or security posture or of design, security procedures, internal controls, or the implementation of any of these that permit a threat or hazard to occur" (reference).
The definition I use is simpler: a vulnerability is "an unintended 'feature' (a bug) that leads to unintended functionality." By this definition, any bug is a vulnerability. However, a bug is only a relevant vulnerability if it has security consequences.
A bug in a website that displays text in the wrong color is not a relevant vulnerability, because wrong colors risk nothing but aesthetics. On the other hand, a bug in a website that doesn't limit login attempts is a relevant vulnerability, since it lets an attacker guess user passwords rapidly, and that has security implications.
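A minimal sketch of the fix described above, counting failed attempts and refusing further guesses past a limit. The password, lockout threshold, and function names here are illustrative assumptions, not from the original text:

```c
#include <string.h>

#define MAX_ATTEMPTS 5

/* Hypothetical sketch: a login check that counts failed attempts and
   locks out further guesses, closing the rapid password-guessing hole.
   The password "hunter2" and the limit of 5 are illustrative only. */
static int failed_attempts = 0;

int try_login(const char *password) {
    if (failed_attempts >= MAX_ATTEMPTS)
        return -1;                        /* locked out: guess rejected */
    if (strcmp(password, "hunter2") == 0) {
        failed_attempts = 0;              /* reset counter on success */
        return 1;                         /* success */
    }
    failed_attempts++;
    return 0;                             /* wrong password */
}
```

Without the `MAX_ATTEMPTS` check, an attacker could call `try_login` in a tight loop and work through a password list at full speed; with it, even the correct password is rejected once the account is locked.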
Vulnerabilities are referenced and tracked by their CVE number, and some are infamous enough to get their own name and logo, for example, Meltdown.
Vulnerabilities can be organized into two major classes: design vulnerabilities and implementation vulnerabilities.
Design vulnerabilities exploit a flaw in the design of a system itself. No programmer can implement a flawed design in a way that removes the vulnerability; however the code is written, the flaw remains.
Examples of these vulnerabilities can be seen in cryptographic programs: a cipher can have a flaw, and no matter which language you implement the cipher in, or which programmer writes the code, the program will always have that flaw (and therefore the vulnerability). More information on design vulnerabilities
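As a concrete (and deliberately simple) illustration of a design flaw in a cipher, consider the classic Caesar cipher, which I use here as an assumed example: its keyspace holds only 25 usable keys, so an attacker can try them all, and no implementation in any language can fix that:

```c
#include <string.h>

/* Hypothetical sketch of a design vulnerability: the Caesar cipher.
   Its keyspace holds only 25 usable shifts, so an attacker can
   brute-force every key. The flaw lives in the design, not the code. */
void caesar_encrypt(const char *in, char *out, int key) {
    int i = 0;
    for (; in[i] != '\0'; i++) {
        if (in[i] >= 'a' && in[i] <= 'z')
            out[i] = 'a' + (in[i] - 'a' + key) % 26;
        else
            out[i] = in[i];   /* non-lowercase characters pass through */
    }
    out[i] = '\0';
}

/* The attack the design allows: try every key until a known (or
   plausible) plaintext appears. Returns the recovered key, or -1. */
int brute_force_key(const char *ciphertext, const char *known_plain) {
    char guess[256];                      /* assumes short inputs */
    for (int key = 1; key < 26; key++) {
        caesar_encrypt(ciphertext, guess, 26 - key);  /* shift back */
        if (strcmp(guess, known_plain) == 0)
            return key;
    }
    return -1;
}
```

A correct, bug-free implementation of `caesar_encrypt` does nothing to stop `brute_force_key`; that is what distinguishes a design vulnerability from an implementation one.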
Implementation vulnerabilities exploit the implementation of a system: if the program were written differently, the bug (and therefore the vulnerability) would not exist.
Many implementation vulnerabilities stem from unsafe memory usage. For example, when code copies user-controlled input into a buffer that isn't big enough, the buffer overflows (a "buffer overflow"), overwriting the memory after it. If the memory after the buffer holds, for example, a variable recording whether the user has successfully authenticated, an attacker can overwrite that value and use the vulnerability to bypass authentication.
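The scenario above can be sketched in C. The struct layout, field names, and functions here are assumptions for illustration: an `authenticated` flag sits in memory directly after an 8-byte buffer, and an unchecked `strcpy` of a longer input spills into it (formally undefined behavior in C, but this is how the classic attack plays out on typical compilers):

```c
#include <string.h>

/* Hypothetical sketch of an unsafe copy. The struct places an
   "authenticated" flag directly after an 8-byte name buffer, so a
   user-supplied name longer than the buffer can spill into the flag.
   (Writing past the array is undefined behavior, but this is the
   classic buffer-overflow attack on typical compilers.) */
struct session {
    char name[8];
    int  authenticated;   /* sits in memory right after name */
};

void set_name(struct session *s, const char *input) {
    strcpy(s->name, input);   /* BUG: no bounds check on input length */
}

void set_name_safe(struct session *s, const char *input) {
    /* Fixed version: copy at most sizeof(name) - 1 bytes, then
       null-terminate explicitly (strncpy does not guarantee it). */
    strncpy(s->name, input, sizeof(s->name) - 1);
    s->name[sizeof(s->name) - 1] = '\0';
}
```

This is also an implementation vulnerability by the earlier definition: writing the copy with a bounds check, as `set_name_safe` does, makes the bug disappear without changing the system's design.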