Jul 31, 2007
Originally posted on Half an Hour, July 31, 2007.
Summary of Larry Korba's talk at the IFIPTM Conference in Moncton.
Data breaches: we are concerned not just about the breaches, but also data quality. We need to make sure data is accurate.
An organization has clients, who give it data. As more clients are added, and through marketing and other activities, the organization accumulates a great deal of data. This is facilitated by cheap storage. It also creates risks, such as identity theft and fraud.
Contributing factors include expanding networks, growing e-commerce, complex software, greater pressure to reduce time to market, and the ubiquity of computers.
ID thieves are able to gain trust using various methods, including imitation, diversion to criminal sites, and stealing accounts and passwords. ATM fraud is an example. Interestingly, the people who perform the fraud network among themselves to become more effective - it's all very well organized.
What do we need to combat this?
- inexpensive, effective multifactor authentication (see the sketch after this list)
- biometrics - something that is privacy aware (an iris scan, for example, can reveal a lot of health information about a person), low cost, harmless, easy to use, with low error rates
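(On the multifactor point: the talk didn't describe a specific scheme, but one common inexpensive second factor is a time-based one-time password. Here's a minimal sketch in Python using only the standard library; the shared secret, 30-second window, and six-digit codes are illustrative assumptions, not anything from the talk.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = int(for_time) // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))

# Illustrative use: the secret would normally be provisioned to a phone or token.
secret = b"example-shared-secret"
print(verify(secret, totp(secret, time.time())))         # True
```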
Privacy without accountability doesn't make any sense at all, but there are a lot of laws and agreements about the services provided. It's complicated. There are so many different systems in place that trying to integrate them is a challenge. Even finding the data can be a challenge - knowing who touched the data, when, and why. And how do you layer compliance information? People may not be authorized to see breach information.
How do you establish privacy and accountability, then? Audits? Automation? Try to use extant text logs, machine learning techniques, knowledge visualization and more. The result is a real cacophony of data.
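(A sketch of what mining extant text logs for accountability could look like: the log format below is hypothetical, and a real system would work from the organization's actual audit logs, but the idea - reconstructing who touched which record, and when - is the same.)

```python
import re
from collections import defaultdict

# Hypothetical log format, e.g.:
#   2007-07-31 10:02:11 user=alice action=read record=cust-4471
LINE = re.compile(
    r"(?P<ts>\S+ \S+) user=(?P<user>\S+) action=(?P<action>\S+) record=(?P<record>\S+)"
)

def access_trail(lines):
    """Build a per-record trail of (timestamp, user, action) tuples."""
    trail = defaultdict(list)
    for line in lines:
        m = LINE.search(line)
        if m:
            trail[m.group("record")].append(
                (m.group("ts"), m.group("user"), m.group("action"))
            )
    return trail

log = [
    "2007-07-31 10:02:11 user=alice action=read record=cust-4471",
    "2007-07-31 10:05:42 user=bob action=update record=cust-4471",
]
for record, events in access_trail(log).items():
    print(record, events)   # who touched the data, when, and how
```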
Commoditization of software: people can buy computers very easily, and software too. You can find what you need very quickly - for example, with Google Code Search.
The line between hardware and software is becoming blurred. You can have systems that simulate what hardware does - you can deal with hardware problems with a software patch. A system like Xen, a virtual machine monitor, can run faster than Windows itself. Or you might want to look at Amazon's 'Elastic Compute Cloud', an online computer emulation you can rent.
A system is only as secure as its weakest point. Example: Richard Feynman noticed construction workers entering a secure facility through a hole in the fence. Feynman also got involved in safe-cracking - he would look for patterns in the security.
From the attacker's point of view, software is a commodity. Any software you can imagine can be found, legally and otherwise, including hacking and cracking software (e.g. the Metasploit Framework).
From a security implementor's point of view - stupid defenses only keep out stupid attackers.
Planning for security in the design stage is rarely done. But people should do things like: be explicit about programming language defaults, understand the internal representation of data types, know how memory is used, understand thread and object use, and perform rigorous unit testing.
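(As an illustration of two of those practices - being explicit about internal representations, and rigorous unit testing - here is a small sketch of my own, not from the talk. It pins down a 16-bit length field explicitly and tests that out-of-range values fail loudly instead of silently wrapping.)

```python
import struct
import unittest

def encode_length(n: int) -> bytes:
    """Encode a length as an explicit unsigned 16-bit big-endian field.

    Being explicit about the representation means out-of-range values
    fail loudly instead of silently truncating or wrapping around.
    """
    if not 0 <= n <= 0xFFFF:
        raise ValueError(f"length {n} does not fit in 16 bits")
    return struct.pack(">H", n)

class TestEncodeLength(unittest.TestCase):
    def test_round_trip(self):
        self.assertEqual(struct.unpack(">H", encode_length(513))[0], 513)

    def test_rejects_overflow(self):
        # A classic attack surface: oversized lengths must not wrap to small ones.
        with self.assertRaises(ValueError):
            encode_length(0x10000)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            encode_length(-1)

if __name__ == "__main__":
    unittest.main()
```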
You need to understand how the code is compiled - because that's what the attacker will do. Never assume your system is secure, and never assume there are no bugs, especially if you try to use 'home-brew' crypto.
Also - the users are the weakest link in the chain. You have to think about how users will circumvent the security you put in place. Security is not convenient. People in charge of security become complacent when, for example, they have a powerful firewall.
All good plans include the human element. User involvement in early research and development is vital to this. You need to assess protocols against what the user expects, what they need, and what they understand. Passwords, for example, are often stored on post-it notes, or shared with other users.
Some work is being done analyzing the Enron email data set (several hundred thousand emails), including the distribution of how passwords were passed around the organization. The same passwords were given out, and weak passwords were used.
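(The talk didn't show the analysis itself, but a rough sketch of the kind of scan involved might look like the following; the regex and the toy corpus are my own assumptions, and a real run would parse the actual Enron maildir format.)

```python
import re
from collections import Counter

# Assumed: each email is a (sender, body) pair.
PASSWORD = re.compile(r"password\s*(?:is|:)\s*(\w+)", re.IGNORECASE)

def password_mentions(emails):
    """Count how often each disclosed password string recurs, and by whom."""
    counts = Counter()
    senders = {}
    for sender, body in emails:
        for pw in PASSWORD.findall(body):
            counts[pw] += 1
            senders.setdefault(pw, set()).add(sender)
    return counts, senders

emails = [
    ("alice@example.com", "The password is hunter2, don't share it."),
    ("bob@example.com",   "Reminder: password is hunter2 for the trading app."),
]
counts, senders = password_mentions(emails)
for pw, n in counts.items():
    # A password mentioned by many senders is evidence of sharing.
    print(f"{pw!r} disclosed {n} times by {len(senders[pw])} senders")
```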
How to improve systems: combine techniques. Learn from other fields: e.g. the game 'Spore' - knowledge display, visual programming, intelligent agent design; also e.g. Digg, Kartoo, and intelligent computing - an associative memory.
Data-mining techniques related to trust:
The problem of security is making sense of a lot of information scattered all over the place. There are a lot of false negatives. The interesting part is when you combine data from different sources. Visualizing complex data: how do you understand a network of hundreds of thousands of records?
It's like security intelligence. You may know you have a problem with a person because of past actions, but it's very difficult to know what's happening now. A lot of the results are based on the expertise of the analyst - the analyst is the filter. We need to be able to learn from the analyst, or have some community-based filtering tools. How an analyst uses tools effectively varies from one analyst to the next.
Trust:
Are trusted computing platforms ready yet? Can they make DRM more effective, ensure software revenues, and so on? They can also make software applications more secure: you have an effective set of tools for new security techniques. But there are issues with the implementation - there could be backdoors, etc.
(At this point Korba's computer entered standby mode and the talk paused...)
The situation is, there is a lot of data all over - you need to determine what data is trustworthy and what is not, rapidly. You need to learn the trust value of individuals, etc.
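(The talk didn't say how trust values would be computed. One common approach in the trust-management literature - not necessarily Korba's - is a beta-reputation score updated from positive and negative interactions, sketched here.)

```python
from dataclasses import dataclass

@dataclass
class TrustScore:
    """Beta-reputation trust score.

    alpha counts positive interactions, beta negative ones; the expected
    trust is alpha / (alpha + beta), starting from an uninformed 0.5 prior.
    """
    alpha: float = 1.0
    beta: float = 1.0

    def update(self, positive: bool) -> None:
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def value(self) -> float:
        return self.alpha / (self.alpha + self.beta)

score = TrustScore()
for outcome in [True, True, False, True]:   # observed interactions
    score.update(outcome)
print(round(score.value, 2))                # 0.67 after 3 good, 1 bad
```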
Attackers never rest. The sophistication of the attacks gets more impressive. 'Kits' are readily available, allowing anyone to break into computing systems without any real understanding. Attackers are now building temporal delays into attacks, and more.
Some questions: can security be measured? Computer systems are far too complex for this sort of analysis, making it difficult to assess security systems. And they rely on things going on in the operating system. One attacker can make a mockery of a 30-person-year project.
You need to think like an attacker - think evil. You need to keep things simple for the user, but effective. If you ask things of users, make sure they are attainable. You need to be comprehensive.
Questions: I raised the question: what is the impact of the company's agenda on security? The company is not always benign, security is not always used for good, and sometimes the 'hackers' are not 'evil' - e.g. DVD Jon. Wouldn't security be advanced if the security industry were deliberately neutral on such matters?
Response: I can't see companies being evil like that.
(I'm sitting here thinking, he read through thousands of Enron emails, but doesn't see how companies can be evil?)
Follow-up: described a credit card company requiring people to enter their online banking password - shifting risk from itself to customers and increasing the risk of phishing.
Follow-up: we are working on tools to enhance privacy, but we are basically disregarding accountability.