CiRE's Tech Stuff
Just another weblog

Static Security / Fail Security pt. 1

The most prevalent, dangerous, and negligent form of security is the “batten-down-the-hatches” mindset, in which firewalls, remote authentication, and layers upon layers of poor planning (called obfuscation by more enterprising individuals) are passed off as security. This “static” view of security is easy to implement, easy to maintain, and, ultimately, easy to defeat.

CMMI offers a model by which the mandated, continual maturing of a system’s processes (not simply a computer’s, but any system’s) can be translated into continual improvement, and yet this model has not been adapted, as it should be, to the realm of security.

Security across the Enterprise cannot and should not be compartmentalized into “physical”, “electronic”, or any other type. Security as Data Assurance, which is the only true end of security in the Enterprise, must have oversight over all systems within it, both computer and human. This means, quite obviously, that an Enterprise’s security group/department must own all of the physical security on-site as well as all electronic security.

What this group must also do, however, is take responsibility for disaster recovery of data and system structure, and even for the standards and conventions used by other departments. Thus, a security group will not be isolated from any other department, but will be constantly liaising with all of them.

The code that your developers produce to be run on your in-house systems must be tested for vulnerabilities prior to deployment, and your systems must be tested post-deployment for vulnerabilities that the new code may open elsewhere in the system. Computers are deterministic, so the way any two pieces of code interact can affect all involved components. It isn’t feasible to write everything in assembler, which means that low-level exploits are always a danger. The only way to mitigate this is to have developers adhere to programming standards and industry best practices. Things like naming conventions, for instance, aren’t “optional” (nor something you can be “adamantly opposed” to… as one person professed to be).
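To make the point concrete, here is a minimal sketch of what a pre-deployment code gate could look like. The rule set (the list of risky calls, the snake_case convention) is purely illustrative, my own assumption of what such a gate might check, not a complete or recommended scanner:

```python
import ast

# Hypothetical rule set for a pre-deployment gate: flag calls to
# known-dangerous APIs and function names that break a snake_case
# naming convention. Both lists are illustrative only.
RISKY_CALLS = {"eval", "exec", "system", "popen"}

def audit_source(source: str) -> list:
    """Return (line, message) findings for one module's source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag calls to the risky builtins/APIs listed above.
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in RISKY_CALLS:
                findings.append((node.lineno, f"risky call: {name}"))
        # Enforce the naming convention -- the "not optional" part.
        if isinstance(node, ast.FunctionDef) and not node.name.islower():
            findings.append((node.lineno, f"naming violation: {node.name}"))
    return findings
```

Run against every changed file before deployment, any finding blocks the release; the same hook is where deeper vulnerability scanning would plug in.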

Testing for vulnerabilities must be done on a regular basis, using a wide range of methods, and areas of deficiency must be identified and given extra attention. Weekly penetration tests by in-house specialists should augment less-frequent third-party pen-testing.
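The in-house portion of that testing can be scripted and scheduled. As a sketch, the following checks which TCP ports on a host actually accept connections; real penetration testing goes far beyond a port sweep, and the host/port parameters here are placeholders you would supply:

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` on `host` that accept TCP connections."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # an errno otherwise -- no exception handling needed.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Scheduled weekly (cron, CI, what have you), a script like this gives a baseline to diff against: any port open this week that wasn’t open last week is a finding to investigate.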

Just as IT coined the phrase “know your data” as a guiding principle, security experts must know ALL of their related components: the systems and the nature of the data on them, the disaster contingencies, and, perhaps most importantly, the nature of what they are securing against.

One of the worst transgressions a security specialist can commit is narrowing the perceived threats to one category because of its frequency or ease of defense, rather than its actual likelihood as a threat; for instance, a defense contractor focusing on “script kiddies” instead of planned, goal-oriented hacking, or compromise by an insider.

It is very easy to decide on a strategy of “stone-walling” over obfuscation or data distribution, even when it may not be the best choice. Keeping certain data on closed-circuit networks, or on removable media stored in a secured location, can often be far safer than hoping your network security will never be compromised. All of this, of course, depends on the nature of the data, and entails efficiency-versus-security assessments.

More on this later, as it’s becoming light outside, and I need to get ready for class.


