Tuesday, November 28, 2006

Cracking Syskey and the SAM on Windows Using Samdump2 and John

The following is a good tutorial on cracking XP. Just goes to show you: use strong passwords and don't let people have physical access to your machine.

Cracking Syskey and the SAM on Windows Using Samdump2 and John (Hacking Illustrated Series)


Saturday, October 28, 2006

How To Become A Hacker

An extensive, definitive guide to learning to be a hacker, written by the editor of the Jargon File and author of a few other well-known documents of similar nature. A very comprehensive document that makes for an interesting read whether you want to be a hacker or just want to know more about the lifestyle. A very nice resource.

How To Become A Hacker


Monday, September 25, 2006

ATM Passwords Found Online

Saw this today. I can tell you for a fact that the manufacturer's password is rarely changed.


Up to 70,000 US cash machines vulnerable.
Andrew Charlesworth, vnunet.com 22 Sep 2006

The manufacturers' passwords for cash machines used widely across the US are available online in an installation manual.

New York-based security researcher Dave Goldsmith, founder and president of penetration testing outfit Matasano Security, pieced together clues from a CNN broadcast and the website of Tranax Technologies, the ATM's manufacturer.

Then he searched for the ATM's installation and maintenance manual online which he said gave him enough information to hijack a Tranax Mini-bank 1500 series ATM if the manufacturer's default passwords had been left unchanged.

"My guess is that most of these mini-bank terminals are sitting around with default passwords untouched," Goldsmith told eWeek.

According to the Tranax website, around 70,000 1500 series ATMs are installed in the US.


Wells Fargo Discloses Another Data Breach

Here we go again.

One thing I will never understand is why a bank would let an auditor take information out of the institution without having a fully encrypted disk.

This stuff is so simple to fix . . . .

Wells Fargo discloses another data breach

Friday, September 15, 2006

Who Should Bear the Cost of Phishing Attacks?

I came across a recent article from the Netcraft site that poses some interesting questions. Should banks be responsible for monetary losses due to phishing schemes, or are customers to blame for failing to protect their information and using technology poorly?

Here is the article by Netcraft. Warning! There is a sales pitch here; I have not personally evaluated this product.

Bank, Customers Spar Over Phishing Losses

"Who should bear the cost of phishing losses: the bank or the customer? That question is at the heart of a recent dispute between the Bank of Ireland and a group of customers who fell victim to a phishing scam that drained 160,000 Euros ($202,000) from their accounts. The bank initially refused to cover the losses, but has since changed its mind and credited the accounts of nine victims, who had threatened to sue to recover their funds.

"The Bank of Ireland incident is one of the first public cases of a bank seeking to force phishing victims to accept financial responsibility for their losses, but it likely won't be the last. Phishing scams continue to proliferate, as Netcraft has blocked more than 100,000 URLs already in 2006, up from 41,000 in all of 2005. Financial institutions continue to cover most customer losses from unauthorized withdrawals. But after several years of intensive customer education efforts, the details of phishing cases are coming under closer scrutiny, and the effectiveness of anti-phishing efforts taken by both the customer and the bank are likely to become an issue in a larger number of cases.

"The issue of responsibility has been most prominent in the UK. In late 2004, the UK trade association for banks, known as APACS, began warning that financial institutions may stop covering losses from customers who have ignored safety warnings. That stance is reflected in the group's statement on customer protection.

"Banks are committed to keeping their customers' money safe and will protect customers from Internet fraud as long as they have acted with reasonable care," APACS says on its Bank Safe Online web site. "Customers must also take sensible precautions, however, so that they are not vulnerable to the criminal. Each case of Internet fraud is different and you can be sure that the bank will make a full investigation in the unlikely event that money is withdrawn from your account."

"The American Bankers Association, the industry group for the U.S. banking industry, is more definitive in its reassurance to customers on phishing losses. "Consumers are protected against losses," the ABA says on its web site. "When a customer reports an unauthorized transaction, the bank will cover the loss and take measures to protect your account."

"But there have been exceptions. Last year Miami business owner Joe Lopez sued Bank of America after it refused to cover $90,000 in phishing losses. Lopez' computer was infected by a keylogging trojan, which captured his login details. His funds were soon transferred to a bank in Latvia. When Bank of America refused to cover the loss, Lopez sued for negligence, saying the bank failed to warn him about the trojan.

"Where will the line be drawn between the bank's responsibility and the customer's? The handful of existing cases leave the issue unsettled, but suggest that the quality of the banks' phishing defenses will be a key point in the debate, and that in practice banks will not be able to pass on the financial risk of phishing to their customers simply through careful writing of the customer agreement, as the customer has no direct influence over the anti-phishing measures the bank takes."

Here is the link to the original story:
Bank of Ireland to refund phishing victims


Thursday, September 14, 2006

Biometrics: Use Capacitance, Dummy

I get a lot of questions about biometrics and fingerprint scanners, especially from the bankers I normally work with, as they are under a deadline this year.

The Federal Financial Institutions Examination Council (FFIEC) issued new guidance on the risk management controls necessary to authenticate the identity of customers accessing online financial services, and has stated that US banks will be expected to comply with the rules - which include the introduction of multi-factor authentication - by the end of 2006!

The council is an inter-agency body representing the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS).

The guidance, which applies to all member banks, states that firms are expected to use enhanced authentication methods when verifying online customers and states that single-factor authentication, when used as the only control mechanism, is inadequate for high-risk transactions involving access to customer information or the movement of funds.

Where risk assessments indicate that the use of single-factor authentication is inadequate, the FFIEC says financial institutions should implement multifactor authentication.

The regulator also says that banks should ensure there are reliable methods of originating new customer accounts online - as required by the US Patriot Act - and implement fraud detection systems. Banks are also expected to educate customers about the dangers of ID theft.

FFIEC says financial institutions will be expected to achieve compliance with the guidance no later than year-end 2006.

So I get a lot of questions . . . .


Say Goodbye Mr. Network Geek

I was intrigued by a recent blog post by Michael Farnum of An Information Security Place that laments yet another Microsoft vulnerability. Michael has decided to get out of his Information Security Manager role, and in my comment on his blog, I suggest we all do.

This led me to thinking a lot about security, and the "Three Legged Dog" of Confidentiality, Integrity and Availability. While these three "pillars" of information security must be understood and followed, the tasks within each of these practices have drastically changed in the last couple of years, and continue to do so at an alarming pace. While CIA defines the end goal, what we have really been doing lately is trying to stick our finger in a large dam that has already released its flood. We spend more time in defense of the corollary to CIA . . . DAD. We spend the majority of our time trying to prevent Disclosure, Alteration and Destruction. With almost 90% spent on Destruction.

Information Security workers have found themselves caught up in this wave of change. Originally, it was an important and vital job to track down the current virus threats, manage the Service Packs in [Pick your Windows flavor here], install the few hotfixes needed and call it a day. The rest of our time was spent on the important matters - defining the information we want to protect, striking the correct balance between 100% usable and 100% secure, gaining an in-depth knowledge of our environment and our user communities, training our communities on what was important and what was critical.

Remember the backlash that ensued when Microsoft reported that it would pool vulnerability information and release security announcements and fixes on the second Tuesday of every month? The big worry at that time was that there would be many more zero-day vulnerabilities to worry about, and that vulnerabilities could arise without the installed base being aware - leading to another Code Red or Sasser worm outbreak.

While that was a valid concern and continues to be true, what we really missed was how this single event changed the landscape of the typical information security worker's job. It was also one of the most brilliant marketing ploys ever foisted upon the public. While 1 or 2 vulnerabilities used to generate a firestorm of complaints and meaningful news, 8 new vulnerabilities released on Black Tuesday barely register a blip on true news sources. If you eliminate all of the pseudo news, like vendor security blogs and patch management companies hawking their wares, the news is fairly light. Unless, of course, someone finds the vulnerability before Tuesday, or the patch itself causes further problems.

What does this mean for us? It means that X number of vulnerabilities are announced every 20 working days. Adding to the problem, applying these patches to production systems has sometimes been problematic; multiply that by trying to figure out which vulnerability affects which system, and the job becomes full-time + a lot of hours * X. And this is only ONE software vendor.

Which leads me to the point of this article: We spend far too much time running down vulnerabilities from hardware and software vendors, and not enough time creating secure environments, understanding business needs, and finding the true security holes. Furthermore, it's very difficult to convince the executive teams that this is where the money should be spent.

Let's face it: software these days contains millions of lines of code; it's impossible to create without bugs, easy to break, and completely unpredictable. We have to face the future . . . these millions of lines of code do not belong on individual instances of millions of servers and PCs. What is the future? Largely, your servers will be moved to the cloud, core data will be aggregated to service providers, and network guys will be relegated to the black boxes they originally came from. Think about it: bandwidth will become large and cheaply available, and most of these services can be outsourced (Virus, Spam, Patch Management, etc.).

If there is a way to give the end user a better computing experience, reduce the cost of maintenance, and maintain or improve security, what is to keep companies from adopting this en masse?


Monday, September 04, 2006

Authentication - Who Are You? Can You Prove It?

The following article was written by one of Compuhsare's top Security Gurus for our monthly newsletter. It is a great introduction to the concepts of authentication.


When we use the term authentication, we are referring to the process of identifying a person, confirming their identity, and securing access to that person’s accounts.

Until recently, this has been done by employing the standard username and password. As the power of today’s PC has increased, breaking even well-selected passwords becomes easier each day. This weakness is underscored by the FDIC guidance issued in October of this year.

The FFIEC agencies consider single-factor authentication, when used as the only control mechanism, to be inadequate for high-risk transactions involving access to customer information or the movement of funds to other parties.

The basic premise of this guidance is that simple username and passwords are not effective authentication. Our discussion of authentication focuses on three areas:
  • Identification – Who are you?
  • Multifactor Authentication – Can we verify your identity in more than one way?
  • Non-Repudiation – Can we prove a valid transaction has occurred?
First, we will look at the problems with simply using usernames and passwords. In the traditional method, our user ID is the key that the system will use to look up our password information and enable the services that we are permitted to use. The weakness in this is a very simple one: if I know that my user ID is my first initial and last name, I can make a pretty good guess that your user ID follows the same convention. I’ve guessed your user ID; now I just have to get your password.

Traditional systems rely on your password to verify that you are who you say you are. These days, however, even well-picked passwords are susceptible to breaking. Precomputed tables of hashed letter-and-number combinations, called “rainbow” tables, let an attacker reverse a stolen password hash in a matter of minutes.
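A minimal Python sketch makes the speed concrete. A true rainbow table uses precomputed hash chains to save space, but against unsalted hashes the effect is the same as the simple lookup below (passwords and hash choice here are illustrative only):

```python
import hashlib

# Build a lookup table mapping hash -> plaintext for candidate passwords.
# An attacker precomputes this once; recovering any stolen unsalted hash
# then becomes a dictionary lookup instead of a brute-force search.
candidates = ["letmein", "password1", "s3cret", "hunter2"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# A stolen password hash is reversed instantly.
stolen_hash = hashlib.md5(b"hunter2").hexdigest()
print(table.get(stolen_hash))  # → hunter2
```

Salting each password with a random value defeats this kind of precomputation, which is one reason modern systems never store bare hashes.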

If I have guessed your user ID and password, I have assumed your online identity and can perform transactions as you. This presents the problem of non-repudiation: can we prove that a valid transaction was performed between valid parties? We have to verify that the transaction and the parties involved in it cannot be contested. If I have guessed your username and password, how can you prove that it wasn’t you that transferred all of your funds to a numbered Swiss bank account?

Now let’s look at some of the components of strong authentication. To prove someone’s identity, we can use the following simple formula: we need two of the three following components.
  • Something you have – a physical device of some sort, such as a card or security token
  • Something you are – a biometric identifier, such as fingerprint scan, or retinal scan of your eye
  • Something you know – a passcode that only you would know such as password or phrase, or answers to personal security questions
These three components are found in most of the emerging methods for identification or authentication and together provide the foundation for non-repudiation. Authentication utilizing two or more of these components is called “multifactor” authentication.
A good example of multifactor authentication is your ATM card. You have your card, something you have, and your PIN, something you know.
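The two-of-three rule can be sketched in a few lines of Python. This is a toy model for illustration only; real systems verify each factor cryptographically rather than as a simple boolean:

```python
# Toy model of the "two of the three components" rule described above.
# Each argument records whether that factor was successfully presented.
def is_multifactor(something_you_have: bool,
                   something_you_are: bool,
                   something_you_know: bool) -> bool:
    factors = sum([something_you_have, something_you_are, something_you_know])
    return factors >= 2

# ATM example: card (have) + PIN (know) = two factors.
print(is_multifactor(True, False, True))    # → True
# Username/password alone is single-factor.
print(is_multifactor(False, False, True))   # → False
```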

Completing a successful, secure online transaction requires several steps. First, we must validate that the site we are communicating with is the actual merchant’s site. Next, we must identify ourselves and successfully authenticate our identity. Finally, we must be able to prove that the transaction was successfully completed by both parties.

When we start our online session, we need to verify that we have an authentic connection to the web server of the institution with whom we want to do business. This is commonly accomplished using a digital certificate. A website registers with a trusted third party that validates its identity. When you connect to the website, you can view the digital certificate and verify that it is valid. In most web browsers, you can click the lock icon in the lower right of the browser window to view the digital certificate for the site with which you are communicating.

An additional technique that some institutions use to further validate that a user has reached their site is to require the user to answer some personal security questions and identify a picture with a caption they have selected.

Next, we must identify ourselves to the web server and validate our identity. This is one of the problems in the current online banking environment. We have something we know in our username and password. However, we do not have either of the other components of multifactor authentication, something we have or something we are.

A technique that is becoming more prevalent is the use of digital signatures, which rely on a technology called public/private key pairs. A key pair has two interrelated parts, generated together as a mathematically related pair. The public key is made available to anyone who wants it, while the private key is kept secret on your PC. Your private key becomes “something you have.”

The only way to complete a secure transaction would be to use your password, something you know, and use your private key, something you have, to authorize the session. The basis for non-repudiation of this transaction is that we have used multifactor authentication to ensure that you are who you say you are.
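For illustration, here is a toy “textbook RSA” signature in Python with deliberately tiny numbers. Real systems use keys thousands of bits long with proper padding; this sketch only shows the public/private key relationship that makes non-repudiation possible:

```python
# Toy "textbook RSA" signature with tiny primes, purely illustrative.
# Never use numbers this small (or unpadded RSA) in practice.
p, q = 61, 53
n = p * q                          # modulus, part of both keys
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (kept secret)

message_digest = 42                # stand-in for a hash of the transaction

signature = pow(message_digest, d, n)              # signed with the PRIVATE key
verified = pow(signature, e, n) == message_digest  # checked with the PUBLIC key
print(verified)  # → True
```

Because only the private-key holder could have produced a signature that the public key verifies, a verified signature ties the transaction to that key holder.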

Another technique frequently used also incorporates “something we have.” A “token” is a physical device that continually generates new values from a complex algorithm. Your complete password is calculated from your PIN and the constantly changing value generated by your security token. This technique is called one-time passwords.

Using a security token allows us to incorporate something I have, my token, and something I know, my PIN, which then increases the security by making my password a constantly changing value. Again, non-repudiation of a transaction has its basis in the use of multifactor authentication to verify our identity.
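A sketch of how such a constantly changing code can be derived is below. This follows the general HMAC-based approach later standardized in RFC 4226/6238; actual token vendors vary, and the secret here is a placeholder:

```python
import hashlib
import hmac
import struct
import time

# Time-based one-time code sketch: the token and the server share a
# secret key, and both derive the same short-lived six-digit code from
# the current 30-second time step.
def one_time_code(secret: bytes, timestep: int) -> str:
    msg = struct.pack(">Q", timestep)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-token-secret"       # placeholder shared secret
step = int(time.time()) // 30
print(one_time_code(secret, step))    # six digits, changes every 30 seconds
```

A stolen code is useless after the window expires, which is what makes the factor “something you have” rather than “something you know.”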

In response to the FDIC guidance, Internet banking and authentication vendors are quickly developing potential solutions to this authentication challenge. No clear-cut methodology has yet emerged. The two methods we have discussed are widely used in other areas of enterprise security and we can expect some form of these techniques to start showing up in our online banking systems very soon.


Thursday, August 31, 2006

Visa Issues Data Security Alert

BROOKFIELD, Wis. — Visa USA issued a data security alert Aug. 31 to warn merchants about the risks associated with storing magnetic-stripe and other sensitive data on point-of-sale systems. The alert recommends specific actions that merchants can take to mitigate these risks.

To support compliance with the Visa USA Cardholder Information Security Program, Visa issues security alerts when vulnerabilities are detected in the marketplace, or as a reminder about best practices.

Security vulnerability

Visa announced in a news release that it is aware of credit and debit compromises that resulted from the improper storage of mag-stripe data after transaction authorization was completed. The mag-stripe holds data in two tracks.

Track information is received by a merchant’s POS system when a card is swiped. Some merchant POS systems improperly store that data after authorization, violating Visa’s operating regulations. Hackers are aware of the vulnerability and are targeting certain POS systems to steal this information.

Visa also has observed compromises involving other data elements, namely card verification value 2 (CVV2), PINs and PIN blocks. CVV2 is the 3-digit number typically found on the signature panel of the card. PIN blocks are encrypted versions of PINs.

According to Visa, merchants may only store specific data elements, including the cardholder’s name, primary account number, expiration date and service code, from the mag-stripe to support card acceptance. But that information must be protected in accordance with the Payment Card Industry Data Security Standard.

Merchants may mistakenly believe they need to store prohibited elements to process merchandise returns and transaction reversals, Visa says. Acquirers should ensure their merchants have proper processes for each type of transaction.

Recommended mitigation strategy

To safeguard their systems and reduce risk from a compromise, merchants should make sure that they are not storing prohibited data.

Visa offers the following suggestions:

• Ask the software vendor to verify that your software version does not store mag-stripe data, CVV2, PINs or encrypted PIN blocks. If it does, those data elements must be removed immediately.

• Ask the software vendor to share a list of files written by the application, and a summary of the content to verify prohibited data is not stored.

• Review custom POS applications for any evidence of prohibited data storage. Eliminate any functionality that enables storage of this data.

• Search for and expunge all historical prohibited data elements that may be residing within your payment-system infrastructure.

• Confirm that it’s necessary to store the data you’re keeping. If not, don’t store it.

• Verify that your POS software meets Visa Payment Application Best Practices. A list of PABP compliant applications is available on Visa’s Web site.
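As a rough illustration of the “search for and expunge” step above, a discovery script might look for Track 2 patterns in stored files. This is a simplified sketch; real discovery tools also validate PANs with the Luhn check and scan databases, logs, and memory:

```python
import re
from pathlib import Path

# Track 2 mag-stripe data looks like ";PAN=EXPIRY..." (13-19 digit
# primary account number, then '=', then expiry/service-code digits,
# terminated by '?'). Finding this pattern in stored files is a red flag.
TRACK2 = re.compile(rb";\d{13,19}=\d{4}\d*\?")

def find_suspect_files(root: str) -> list[str]:
    """Return paths of files under `root` containing Track 2-like data."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and TRACK2.search(path.read_bytes()):
            hits.append(str(path))
    return hits
```

Any file this flags should be reviewed and the prohibited data expunged, per the alert.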


Wednesday, August 30, 2006

Yet Another Loss of Customer Data

SAN FRANCISCO, Aug 29 (Reuters) - AT&T Inc. said on Tuesday that computer hackers illegally accessed credit card data and other personal information from several thousand customers who bought DSL equipment from AT&T's online store.

The phone company said it is notifying "fewer than 19,000" customers whose data was accessed over the past weekend.

The company said it noticed the hacking "within hours," immediately shut down the online store, notified credit card companies and is working with law enforcement agencies to investigate the incident and find the hackers.


Monday, August 28, 2006

Dear Bankers: Your Vault is Not Safe

Several high-profile examples of data tape loss during transit have put customers on alert over the risk that their confidential information may be subject to loss due to movement of backup tapes. For example, Bank of America last year was dealt a severe blow when the company admitted to losing data tapes en route to a data center. The tapes reportedly featured employee and personal information on 1.2 million federal workers.

This year’s news has been full of tape losses from Wells Fargo, Bank of America, Iron Mountain, etc. This, on top of the federal regulators’ heightened focus on Disaster Recovery and Business Continuity due to Katrina and other disasters, has put many financial institutions in a quandary over how to handle backups safely while still providing quick access for disaster recovery needs.

The age-old problem of 100% usable vs. 100% secure rears its ugly head again.

For years I have been telling my financial institution clients that storing your tapes in your vault, or in your sock drawer, is not an adequate recovery solution. Not to mention, it is inherently not secure. Now I am telling you that your vault isn’t secure either.

What? My vault is not secure? That’s right, it’s not. I’m going to share a true story with you now, that is so shocking, so scary, that I cannot even reveal what location this took place in. In order to protect my client’s identity, I will even have to fudge the numbers a little, but rest assured, I am rounding down!

The story starts with a bank robbery. A bank robber walked into a very remote bank branch and demanded all of the money in the teller drawers. When finished, he asked for the security videotapes. The branch manager attempted to explain, at gunpoint, that there are no security tapes and that the cameras were 100% digital.

Not being the brightest bank robber, he did not understand or believe the manager, and took him to the vault. The bank robber then proceeded to steal the bank's DATA tapes, thinking that they were videotapes.

Unfortunately, these tapes contained the names, addresses, social security numbers, birthdates, account numbers, and bank balances of 15,000 active bank customers, and another 8,000 inactive customers.

So your vault is not safe either. What is the solution? You must encrypt your data at rest. Period. There are many online data backup solutions that support block-level daily changes while keeping the data fully encrypted in transit and at rest. At a minimum, data tapes must not be readable in plaintext. We are just not in that world anymore.
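To illustrate what “encrypted at rest” means, here is a deliberately simplified Python sketch using a SHA-256 keystream. This is for illustration only; in practice use a vetted, authenticated cipher such as AES through a proper backup or encryption product, and the sample record below is fake:

```python
import hashlib
import secrets

# Illustrative stream cipher: XOR the data against a SHA-256 keystream.
# An attacker with the tape but not the key sees only random-looking bytes.
def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

key = secrets.token_bytes(32)    # kept separate from the tape
nonce = secrets.token_bytes(16)  # stored alongside the ciphertext
record = b"SSN=123-45-6789;ACCT=0042;BAL=15000"  # fake sample data

ciphertext = keystream_xor(key, nonce, record)
print(record in ciphertext)                        # nothing readable in plaintext
print(keystream_xor(key, nonce, ciphertext) == record)  # round-trips with the key
```

The point: without the key, a stolen tape (or a stolen "videotape") yields nothing, whether it walks out with an auditor or a bank robber.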

In fact, if you are storing any of your non-public private information in a plaintext format, it is only a matter of time and effort before you are going to be exposed.


Friday, August 25, 2006

Why You Should Perform Regular Security Audits

This was a really nice article I found today out of Australia. It makes great points on why security audits are important:

Jonathan Yarden, TechRepublic - August 25, 2006

In less than a decade, Internet security has evolved from an almost esoteric topic to become one of the more important facets of modern computing. And yet it's a rarity to find companies that actually consider information security to be an important job function for all workers—and not just the IT department's problem.

Unfortunately, it's the general opinion of most companies, particularly at the management level, that their computer systems are secure. However, one of the only ways to determine whether this is actually true is by performing a thorough audit of computer systems. But most companies don't make a habit of performing regular security audits, if they perform them at all.

In my experience, many companies base their Internet and information security strategy entirely on assumptions. And we're all familiar with that old saying about making assumptions.

But I don't entirely blame companies for failing to conduct periodic computer security audits. Frankly, the complexity and variability of administering and interpreting a comprehensive computer systems audit is equal to the complexity and variability of the systems used in corporations.

Several dozen popular commercial network and computer security auditing programs are currently available. While I've used several myself, I've honestly found no favorites. These tools produce mountains of useful information, but understanding what to do with the data is no simple job.

Most computer network and system security audits begin the same way. An automated program gathers information about hosts on the corporate network, identifying the type of network device. If applicable, it also scans the TCP and UDP services that are present and "listening" on the host, and it might even determine the versions of the software supplying an Internet service.

In most cases, the process involves at least two automated scans—one of internal networks, which are generally behind a firewall, and one of the Internet subnet used by the corporation. If a security audit doesn't include both an interior and exterior scan, then you're not getting a complete picture of what hosts are on your organisation's network.
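The first step of such a scan can be sketched in a few lines of Python. This only checks TCP connectivity; real audit tools like nmap add UDP probes, service fingerprinting, and version detection:

```python
import socket

# Minimal audit-scan sketch: check which TCP ports on a host accept
# connections. connect_ex returns 0 on success and an errno otherwise,
# so it never raises on a closed port.
def scan_tcp(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan a host you are authorized to audit, e.g. your own machine.
print(scan_tcp("127.0.0.1", [22, 80, 443]))
```

Run this from both inside and outside the firewall, per the article's point, and the difference between the two result sets tells you what your perimeter is actually filtering.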

I also recommend that companies perform their own auditing whenever possible. If that's not possible, it's vital that you select an Internet security vendor you don't currently do business with.

Security audits produce a huge amount of data, and you need to be prepared to review this information in order to truly benefit from the audit. It's also important to understand that a computer security audit may report potential problems where no real issue exists.

For example, an isolated switch from 1998 in an internal network could quite possibly be running firmware that's vulnerable to a denial-of-service flood. Should you replace it? Probably not. Nor should you be too concerned about the ancient Windows NT 4 system running outdated voice mail software that's subject to an obscure TCP sequence number exploit. It's not running anything other than a specialised application for voice mail services, and it's behind the firewall.

But some issues should concern you. For example, it's a good idea to disable guest accounts on dedicated Windows servers. Don't run IIS on Windows domain controllers, and DNS servers should not be running services other than DNS either.

However, a security audit may not always identify these issues, and one could debate whether it's actually a security problem. When there's doubt, disable unused services, or determine a secure solution.

The major problems with security audits are that they typically produce either too much data or not enough. A dearth or an excess of data can lead to misinterpretation and even exploitation of the information. Fear remains a very effective way to sell unnecessary equipment and services to companies that don't truly understand security.

For example, one company's recent Internet security audit completely ignored the security issue of direct VPN connections to the internal network and a dial pool, both of which completely bypassed the firewall. Coincidentally, while the same vendor that performed the audit was busy replacing functioning internal network equipment due to "vulnerable" firmware, one of the many recent Sober flavors was busy spreading internally, sourced from a remote office connected via a VPN.

Knowing what is and what isn't a significant issue goes to the very core of understanding Internet and information security. While assumptions can be correct, in many cases, they're dead wrong. Perform regular security audits on your organisation's network to be sure. And if you're not using a particular TCP or UDP service, shut it off.


Thursday, August 24, 2006

Update for MS06-042 Released Late

Microsoft released the patch for MS06-042 one day late due to technical problems.


I'm not sure I would put this patch into production as there were issues with the patch.

Meanwhile, my recommendation is to implement Microsoft’s “workaround”.
1. Start Internet Explorer 6.
2. On the Tools menu, click Internet Options, and then click the Advanced tab.
3. In the Settings box, click to clear the Use HTTP 1.1 check box under HTTP 1.1 settings, and then click OK.


Internal Network Security Trends

This was an interesting article I came across today. I feel like I have been yelling about "aggressive patch management" and stricter access control for mobile employees for 5 years now. Enjoy:

Don’t Forget About Network Security Inside Your Perimeter

High-profile network security breaches have been headline news these last few months, and the face of network crime is becoming more ominous with the mass theft of sensitive personnel information. The boundaries of the network are also changing. According to Forrester Research, “Remote Access and Business Partner connectivity means the [network] perimeter is disappearing.”

Michael Rothschild, director of marketing for CounterStorm (www.counterstorm.com), developer of the CounterStorm-1 internal network security solution, sees hacking shifting in the past 48 months from the simple defacing of Web sites to the theft of corporate data. He also sees the perpetrators of such cyber attacks shifting from career hackers to organized crime.

Turning Your Security Focus Inside Your Network Perimeter

So many organizations focus their network security on perimeter defenses such as firewalls and intrusion detection, but they also need to focus inside their network perimeter.

CounterStorm’s Rothschild says that beyond the basic security measures of deploying firewalls and antivirus software is the need to establish aggressive patching strategies for both server and client PCs.

Rothschild also emphasizes being diligent about establishing and enforcing internal IT policies for network access. He says, “Mobile workers, road warriors, and home office workers need policies to govern how they access your corporate network.”

Steve O’Brian, vice president of product management and marketing for Granite Edge Networks (www.graniteedgenetworks.com), developers of the Granite Edge ESP appliance-based internal network security solution, says, “Small to midsized enterprises have to support and manage many of the same business processes and IT needs as large enterprises but struggle with efficiencies due to limited staff and budget. In order for IT to overcome these efficiency battles and become enablers for enhancing business performance and overall competitive advantage, data center/IT managers need to focus on deploying low-support solutions that improve core business operations.”

Get the full article here:


Technorati Tags:

Monday, August 21, 2006

FFIEC Releases FAQ on Authentication in an Internet Banking Environment

The Federal Financial Institutions Examination Council (FFIEC) member agencies released a frequently asked questions document (FAQs) to aid in the implementation of the interagency guidance on Authentication in an Internet Banking Environment issued October 12, 2005.

The authentication guidance, which applies to both retail and commercial customers, specifically addresses the need for risk-based assessment, customer awareness, and security measures to reliably authenticate customers remotely accessing their financial institutions’ Internet-based financial services. The FAQs are designed to assist financial institutions and their technology service providers in conforming to the guidance by providing information on the scope of the guidance, the timeframe for compliance, risk assessments, and other issues.

Get the FAQ here.

Technorati Tags:

Sunday, August 20, 2006

Biometrics History -- Looking at Biometric Technologies from Past to Present

Biometrics History -- Looking at Biometric Technologies from Past to Present
By Alice Osborn

The ancient Egyptians and the Chinese played a large role in biometrics' history. Although biometric technology seems to belong in the twenty-first century, the history of biometrics goes back thousands of years. Today, the focus is on using biometric face recognition and identifying characteristics to stop terrorism and improve security measures. Once an individual is matched against a template, or sample, in the database, a security alert goes out to the authorities. The spacing between a person's eyes, ears, and nose provides most of the identifying data.

The ACLU and other civil liberties groups are against the widespread use of these biometric technologies, although they acknowledge the necessity of their presence in airports and after the London bombings. Biometric technologies also need to achieve greater standardization and technological innovations to be recognized as a trustworthy identity authentication solution.

A timeline of biometric technology

• European explorer Joao de Barros recorded the first known example of fingerprinting, a form of biometrics, describing how Chinese merchants of the 14th century used ink to take children's fingerprints for identification purposes.

• In 1890, Alphonse Bertillon, a clerk for the Paris police, studied body mechanics and measurements to help identify criminals. The police used his method, known as Bertillonage, until it falsely identified some subjects. The Bertillonage method was quickly abandoned in favor of fingerprinting, brought back into use by Richard Edward Henry of Scotland Yard.

• Karl Pearson, an applied mathematician, studied biometric research early in the 20th century at University College London. He made important discoveries in the field of biometrics through studying statistical history and correlation, which he applied to animal evolution. His historical work included the method of moments, the Pearson system of curves, correlation, and the chi-squared test.

• In the 1960s and '70s, signature biometric authentication procedures were developed, but the biometric field remained static until the military and security agencies researched and developed biometric technology beyond fingerprinting.

• 2001 Super Bowl in Tampa, Florida -- each facial image of the 100,000 fans passing through the stadium was recorded via video security cameras and checked electronically against mug shots from the Tampa police. No felons were identified, and the video surveillance led many civil liberties advocates to denounce biometric identifying technologies.

• Post 9/11 -- after the attacks, authorities installed biometric technologies in airports to ID suspected terrorists, but some airports, like Palm Beach International, never reached full installation status due to the costs of the surveillance system.

• July 7th, 2005, London, England -- British law enforcement is using biometric face recognition technologies and 360-degree "fish-eye" video cameras to ID terrorists after four bombings on subways and on a double-decker bus. In fact, London has over 200,000 security and surveillance cameras, a network that has been growing since the 1960s.

Today and looking forward

Biometrics is a growing and controversial field in which civil liberties groups express concern over privacy and identity issues. Today, biometric laws and regulations are in process and biometric industry standards are being tested. Face recognition biometrics has not reached the prevalent level of fingerprinting, but with constant technological pushes and with the threat of terrorism, researchers and biometric developers will hone this security technology for the twenty-first century.

Copyright © 2005 Evaluseek Publishing.

About the Author

Alice Osborn is a successful freelance writer providing practical information and advice about everything related to CCTV surveillance systems and related topics. Her numerous articles include tips for saving both time and money when shopping for video security products; equipment reviews and reports; and other valuable insights. Increase your knowledge about CCTV equipment and security cameras when you visit Video-Surveillance-Guide.com today!

Article Source: http://EzineArticles.com/?expert=Alice_Osborn

Technorati Tags:

Thursday, August 17, 2006

Highlights of the 2006 CSI/FBI Computer Crime and Security Survey

I felt like Steve Martin in "The Jerk" this morning, as I was jumping up and down in glee when the new 2006 CSI/FBI Computer Crime Survey arrived on my desk. It's not as easy to yell as "The Phonebook is here! The Phonebook is here!", but you get the point. Each year the Computer Security Institute and the San Francisco FBI Computer Intrusion Squad conduct this exciting survey. Now in its 11th year, it provides interesting insights into the present state of security and the current trends we are seeing in our industry. In this post, I'll be covering the highlights and key findings of the survey.


Overall IT security expenditures are hard to interpret from the survey, as company size is broken out by revenue. While smaller companies with under $100 million in revenue experienced a 200 to 300 percent increase in security expenditures per employee, larger companies experienced a decline in overall spending.

Companies under $10 million in annual sales are spending a whopping $1,664 per employee annually on security and security training, while companies over $1 billion are averaging only $218 per employee. It seems the evil dream of hurting Big Corporate America through cyber-crime is actually crippling the little guy.

Most respondents felt that not enough money was being budgeted for end-user security training. Companies with revenue over $1 billion spend less than $20 per employee on end-user security awareness training. Economies of scale notwithstanding, this strikes me as exceptionally low. Isn’t the end user the greatest threat?

Frequency, Nature and Cost of Breaches

The leading causes of financial loss cited in the survey were:

1. Virus
2. Unauthorized Access
3. Laptop / PDA Theft
4. Theft of Proprietary Information

68% of those losses were from insider threats. This number is down slightly, but it is clear that the problem is not solved by building a more robust perimeter. One interesting statistic in the report is that unauthorized use is down this year, to 52%. Down to 52%! 52% of the companies surveyed reported unauthorized use of their computer systems! Doesn't this bother anyone? I guess it is an improvement over the 70% finding in 2000.

While most attack types have been declining over the past 7 years of the survey, there were several attack types that were on the rise:

1. Financial Fraud
2. System Penetration
3. Sabotage
4. Misuse of Public Web Site
5. Web Site Defacement

All of these attack types were reported by less than 20% of the respondents, but the rise in these categories is something to watch carefully.

64% of all respondents had some sort of website incident, with 59% reporting more than 10 incidents per year. There is obviously something going on here. As organizations have become better at protecting the perimeter with Firewalls, IDS and IPS systems, the remaining Achilles heel is the organization’s public web site, which must remain somewhat open for business.

We began our Deep Web Application Scanning offering in early 2005, and have seen this portion of our business grow rapidly as attackers move to this final frontier. Attacking the web server is easy, fairly unsophisticated, and simple to perform with off-the-shelf tools.

Risk Management

Only 29% of respondents transferred any risk by purchasing external “cyber insurance”. You would expect, with all that has happened in the last 5 years, that organizations would be more willing to pay for insurance. I guess we need a few more tapes with 5 million credit card numbers to disappear.


Overall there was a slight decrease in IT security outsourcing. While not statistically significant (63% to 61%), it is interesting given the current outsourcing trend. It appears that IT security is being considered in a different light than regular IT projects and is not riding the outsourcing wave.


While overall financial losses are down this year, it is still apparent that organizations are not willing to spend on security technology that could really help them. I suspect part of this is that many companies do not know exactly how much risk they are carrying, because they have not performed a quantitative risk assessment. It is not enough to label your risk as High, Medium, or Low. You need to put hard dollars on these items to understand the true impact. This also helps IT organizations get the funding they need. If I can reduce $2 million in risk with a $50,000 patch management program, why wouldn’t I?
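The arithmetic behind that argument is the classic annualized loss expectancy (ALE) calculation. The figures below are hypothetical, chosen only to match the scale of the example:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where the single loss expectancy (SLE)
    is asset value times the fraction lost per incident."""
    sle = asset_value * exposure_factor
    return sle * annual_rate

# Hypothetical: a $4M asset, 25% loss per incident, two incidents a year.
ale_before = annualized_loss_expectancy(4_000_000, 0.25, 2.0)   # $2,000,000/yr
# Suppose patch management cuts incident frequency to one every two years.
ale_after = annualized_loss_expectancy(4_000_000, 0.25, 0.5)    # $500,000/yr

control_cost = 50_000
print(f"Risk reduced by ${ale_before - ale_after:,.0f} "
      f"for a ${control_cost:,} control")
```

Putting hard dollars on both sides of that comparison is exactly what makes the funding conversation easy.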

There is also still a definite lack of end-user awareness training, even though it is widely assumed that the "user is the weakest link." It is also clear that the largest cause of financial loss is not the largest concern of most IT departments. Viruses ranked only 5th on the respondents’ list of concerns, behind:

1. Data Protection (Classification, Identification, Encryption)
2. Web Application Security
3. Regulatory Compliance
4. Identity Theft

One thing I would like to see covered in future years of the study is more data on how these attacks are carried out. How many were due to poor access lists, poor administrative control, or social engineering? We know, for instance, that viruses are the leading cause of financial loss, but how are these viruses introduced into the network? Is it people clicking on e-mail links, surfing the web, or just poor patch management? Until you can answer those questions, it is hard to determine where an organization can realize the greatest reduction of risk at the least possible cost.

Technorati Tags:

Wednesday, August 16, 2006

CNET: RFID Passports Arrive for Americans

The U.S. State Department is about to begin handing out RFID-equipped passports, despite lingering security and privacy concerns.
By Anne Broache
Staff Writer, CNET News.com
Published: August 16, 2006, 4:43 PM PDT

"A first wave of U.S. passports implanted with radio tags will soon begin making their way into the hands of American travelers despite lingering privacy and security concerns, federal officials said Monday.

Not long after researchers at a pair of security conferences in Las Vegas demonstrated potential risks associated with the new documents, the U.S. State Department insisted the documents are tamperproof and said it had begun producing them at the Colorado Passport Agency, which serves applicants from that state and the Rocky Mountain region.

The agency said it plans to issue the documents through the nation's other passport facilities within the next few months, as part of its original plan to make all future passports electronic by October. It was unclear how many e-passports would be mailed out this year, although a State Department representative said Monday that the agency expects to distribute a total of 13 million passports by year's end.

The new passports, which have been undergoing testing for several months and have already been issued to some U.S. diplomats, will be equipped with radio frequency identification (RFID) chips that can transmit personal information including the name, nationality, sex, date of birth, place of birth and digitized photograph of the passport holder. They employ a "multilayered approach" to protect privacy and reduce the possibility that passersby can skim data from the books, the agency said.

"The Department of State is confident that the new e-passport, including biometrics and other improvements, will take security and travel facilitation to a new level," the agency said in a statement.

State Department officials claim that a layer of metallic antiskimming material in the front cover and spine of the book can prevent information from being read from a distance, provided that the book is fully closed. The document will also employ a cryptographic technique called Basic Access Control, which means the RFID chip unveils its contents only after a reader successfully authenticates itself as being authorized to receive that information.

State Department spokesman Kurtis Cooper dismissed recent concerns raised by security researchers that the passports could nevertheless be "cloned"--that is, copied and used in a forged passport. The agency is confident that other security features built into the book would foil would-be imposters, he said.

The cloning technique demonstrated at the Las Vegas events is simple: It requires only a laptop equipped with a $200 RFID reader and a smart card programmer. The laptop's software scanned information from the RFID chip and wrote it to the smart card, which can then be embedded in a fake passport.

Security researchers have not, however, figured out how to alter the personal information, which is protected with a digital signature designed to enable unauthorized changes to be detected. Creating a fake passport therefore would be most useful to anyone who can forge the physical document and resembles the actual passport holder.

"The digital photograph of the passport holder embedded in the data page and the digital signature on the data, combined with our human U.S. border inspection process, would prevent someone from using a forged passport to gain entry into the United States," Cooper said in a telephone interview."

Technorati Tags:

Managing Employee Access

Ok, so we have performed our Risk Assessment and classified our assets and data, so we know what we are trying to protect and where it all resides. Next, we need to consider who needs access to which data, and how we are going to facilitate this.

You can see how important that Risk Assessment is now. If you don't know what you are trying to protect and where it resides, you don't stand a chance.

There are two parts to managing employee access: the first is authentication (verifying who the user is), and the second is authorization (controlling what that user may access).
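As a rough sketch of that split, consider the two questions separately. All names, passwords, and the access list below are hypothetical, and a real system would use a salted, deliberately slow password hash rather than bare SHA-256:

```python
import hashlib

# Hypothetical stored credentials and access-control list.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
ACL = {"alice": {"loan_files": "read", "board_minutes": None}}

def authenticate(user, password):
    """Who are you? Compare a hash of the offered password to the stored one."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return USERS.get(user) == digest

def authorize(user, resource, action):
    """What may you touch? Check the ACL for this user/resource/action."""
    return ACL.get(user, {}).get(resource) == action

if authenticate("alice", "correct horse"):
    print("read loan_files:", authorize("alice", "loan_files", "read"))
    print("read board_minutes:", authorize("alice", "board_minutes", "read"))
```

The point of the separation is that proving identity never implies permission; each resource access is checked on its own.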

Technorati Tags:

Tuesday, August 15, 2006

How to Hack a Bank

To illustrate the points we have covered so far, I’d like to share a real-life story with you that happened to me a few months ago. We were hired by the CIO of a large bank in Texas to perform an internal and external penetration test and site assessment. What happened within the first 45 minutes will hopefully shock you. We have been talking about the first steps in building an Information Security Program that really works, and most importantly we are beginning to lay down the foundation of a “layered” security approach. This story will clearly illustrate why that is a good idea.

When I begin to perform a site assessment, I will usually arrive at the bank’s main administrative office 30 minutes before opening. While in the parking lot, I can easily check for wireless devices, and drive around the building looking for possible entrances. I especially look for employee entrances, designated smoking areas, and external telco closet doors. As the traffic begins to pick up in the morning, and the branch is fully open, I will attempt to “piggy-back” an employee into the institution.

So this is how I began at my client's site. After following an employee through the back door, I found myself in the hallway of the building, but all the doors in the hallway were locked! Foiled. I tried the stairway; as this was a two-story building, I thought I might get lucky. No luck, the stairway was locked. I found the elevator and tried to go to the 2nd floor . . . no luck . . . keycarded. Next I pressed “B” for Basement. Voila! I was now heading downstairs, which, by the way, is where the data center was.

Once downstairs, I again started checking doorways. The doorway to the data center was locked and keycarded; I wasn’t going to be that lucky today! But, lo and behold, the stairway was not locked. I went into the stairway and made my way to the second floor. On the second floor, I found the onsite hacker's dream, the TRAINING ROOM! Yes! A room full of exploitable computers, just waiting for keyloggers and pstoreview (a program that gives me all of the usernames and passwords that someone has entered into Internet Explorer). Better yet, the machines were turned on and logged in! I closed the door slightly, to gain a “moment of obscurity” as they call it in the CIA, cracked my knuckles, plugged in my USB drive with pstoreview, and began . . .

I started by poking around the Network Neighborhood. I immediately found a server with an interesting name: “mail-old”. Hmm, that looks promising. I browsed over to “mail-old”, looked for some shares, and found one called “users”. I went into the users folder, found the President of the Bank’s user folder, opened it (yes, I was surprised I could get this far), and found the CIO's annual performance review, complete with salary and performance history. Total time: 30 minutes in the parking lot, 15 minutes onsite. It turns out that “mail-old” was a server that had been used for a large file transfer and then abandoned. The entire bank file system had been copied there a month earlier. Customer data, loan files, account numbers . . . all were mine for the taking. Luckily, they were paying me for this.

This little story clearly illustrates how a layered security model is supposed to work, and how each layer could have stopped me, or slowed me down enough to make my attempt unsuccessful. This is what security is all about – you’ll never make a system 100% secure. 100% secure = 0% usable; 100% usable = 0% secure. Somewhere in between is the right spot, but it is a continuum. Any system can be broken, as long as you have the time and resources to work on it. Our job as security experts is to increase the work factor for the attacker to such high levels that an attack is nearly impossible or not worth the effort.
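The work-factor idea can be made concrete with some back-of-the-envelope arithmetic. The cracking rate below is purely hypothetical, but the shape of the math is what matters: work factor grows exponentially with password length and character-set size:

```python
def brute_force_years(charset_size, length, guesses_per_second):
    """Expected years to brute-force a password of the given length,
    assuming the attacker searches half the keyspace on average."""
    keyspace = charset_size ** length
    seconds = (keyspace / 2) / guesses_per_second
    return seconds / (365 * 24 * 3600)

# Hypothetical offline cracking rate of 1 billion guesses per second.
rate = 1e9
print(f"8 lowercase letters:  {brute_force_years(26, 8, rate):.6f} years")
print(f"12 chars, ~94 symbols: {brute_force_years(94, 12, rate):,.0f} years")
```

An 8-character lowercase password falls in minutes at that rate; a 12-character mixed password pushes the work factor into the millions of years, which is exactly the "not worth the effort" territory we aim for.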

In this example these are only some of the “layers” that could have thwarted my attempt:

  • Having a keycard that prevented access to the basement. (The stairway door had to remain open as it is the only exit from the basement.)
  • Training all employees to challenge un-badged or unknown people.
  • Calling the police when a suspicious person is sitting in the parking lot of your bank for 30 minutes with a laptop.
  • Segregating the Training and Production networks.
  • Removing old files from the network.
  • Keeping all file shares restricted to an “as needed” basis.
  • Not allowing training PCs to log in automatically.
  • Not leaving PCs logged in un-attended, or using auto-logoff features.
  • Restricting training PCs from browsing the Network Neighborhood.

The next few blogs will cover the building of the layers needed to create an Information Security Program that really works . . . .

I welcome all comments!

Technorati Tags:

Monday, August 14, 2006

Creating Good Physical Security

Physical security describes measures that prevent or deter attackers from accessing a facility, resource, or information stored on physical media. It can be as simple as a locked door or as elaborate as multiple layers of armed guardposts.

The field of security engineering has identified three elements to physical security:

obstacles, to frustrate trivial attackers and delay serious ones;

alarms, security lighting, security guard patrols or closed-circuit television cameras, to make it likely that attacks will be noticed;

and security response, to repel, catch or frustrate attackers when an attack is detected.

In a well-designed system, these features must complement each other. For example, the response force must be able to arrive on site in less time than the attacker is expected to require to breach the barriers. Deterrence also plays a role: persuading attackers that the likely costs of an attack exceed the value of mounting it.

For example, ATMs (cash dispensers) are protected, not by making them invulnerable, but by spoiling the money inside when they are attacked. Attackers quickly learned that it was futile to steal or break into an ATM if all they got was worthless money covered in dye.

Conversely, safes are rated in terms of the time in minutes that a skilled, well-equipped safe-breaker is expected to require to open the safe. (These ratings are developed by highly skilled safe-breakers employed by testing organizations such as Underwriters Laboratories.) In a properly designed system, either the time between inspections by a patrolling guard should be less than that time, or an alarm response force should be able to reach the safe in less than that time.
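That timing relationship can be sketched as a toy model. The rating and response numbers below are made up for illustration (TL-15 is used here only as an example of a safe rated for 15 minutes of skilled attack):

```python
def attack_succeeds(barrier_minutes, detect_minutes, response_minutes):
    """An attack succeeds if the attacker finishes breaching the barrier
    before detection plus response can interrupt it."""
    return barrier_minutes < detect_minutes + response_minutes

# A safe rated for 15 minutes, an alarm that trips after 1 minute,
# and a 10-minute guard response: the attacker is interrupted.
print(attack_succeeds(barrier_minutes=15, detect_minutes=1, response_minutes=10))

# Same safe, but no alarm and patrols only every 30 minutes:
print(attack_succeeds(barrier_minutes=15, detect_minutes=30, response_minutes=5))
```

The lesson is that a barrier rating means nothing by itself; it only matters relative to how quickly detection and response can close the gap.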

Hiding the resources, or hiding the fact that resources are valuable, is also often a good idea as it will reduce the exposure to opponents and will cause further delays during an attack, but should not be relied upon as a principal means of ensuring security.

Sunday, August 13, 2006

Creating an Atmosphere of Risk Management : Part II

Continuing from yesterday's post, here are the beginning steps every company must perform to begin the process of Creating an Atmosphere of Risk Management:

Perform an IT Risk Assessment. If you haven't assessed the risks within your environment, you cannot begin to build the controls needed to adequately manage them. Any policies instituted without this foundation are, at best, unsupported. The interviewing process of a proper Risk Assessment will also help build awareness that this is indeed a serious process in which the corporation is 100% invested.

Classify Your Data. The military does this well. How can you possibly control access to your data if you don't know what type of data it is? Do you have regulated data within your company that must follow certain standards? How about Human Resources (HR) data? How about Board minutes? Financial data? Marketing plans? All of these must be put into classifications. Oh, and by the way, I am not talking just about computer data, I mean ALL data. That loan file you left on your desk during lunch? Not acceptable.

Set up an IT Steering Committee. If you don't have one, you need to start one now. Besides overseeing that Information Technology follows the strategic mission of the corporation, this Committee is also where the standards for security should be ratified.

Set up Board Reporting. Each and every meeting of the Board of Directors should contain a time period in which the overall IT security risk is reported and evaluated. This furthers the top-down approach needed to bring about total awareness.

Perform Regular Testing and Training. Regular testing of security controls, especially regular social engineering testing, is paramount to building awareness.

In my next post, we'll start the next step, which is Creating the Physical Security Perimeter . . . .

Technorati Tags:

Creating an Atmosphere of Risk Management : Part I


Any security professional will tell you that the weakest link in security is always people. Even in the movies, how do the antagonists gain access to secure computer systems? By taking advantage of a person with trusted access. So any Information Security Program, in order to be successful, needs to start by building an “Atmosphere of Risk Management” within the organization.

This atmosphere of security is created by raising the awareness level of all employees and through the direction and example of senior management. We cannot emphasize enough the importance of senior management’s buy-in and involvement in establishing an atmosphere, or corporate culture, where security is second nature to all employees.

In many of the organizations for which I perform security assessments, lack of buy-in by senior management is evident in the setup of their user accounts. More often than not, the President, CEO, and other senior managers are found to have special access privileges that include never having to change their passwords. On top of that, their passwords are among the worst in complexity, making them easily cracked by simple dictionary methods.
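To see why weak executive passwords fall so quickly, here is a toy dictionary attack. Everything here is hypothetical, and unsalted MD5 is used purely for illustration (Windows actually stores passwords as NTLM hashes, which tools like John the Ripper attack the same way):

```python
import hashlib

def dictionary_crack(target_hash, wordlist):
    """Return the plaintext if the unsalted MD5 hash matches a
    dictionary word, else None."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

# Hypothetical hash recovered during an assessment.
words = ["password", "letmein", "banker1", "summer06"]
stolen = hashlib.md5(b"letmein").hexdigest()
print(dictionary_crack(stolen, words))
```

Real wordlists run to millions of entries, and a simple password in any of them is recovered in seconds; only length and complexity push a password out of the dictionary's reach.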

How can employees be expected to follow security policies and practices when it is well known that the top managers do not follow those same policies and practices? Corporate culture is created through the actions and attitudes of the organization’s managers. Therefore, the first step in creating an atmosphere of security is for senior management to adhere to, and enforce, the same policies as everyone else.

Many organizations make the mistake of combining awareness and training, simply calling it security awareness training. Awareness is not training. Awareness is an ongoing process designed to focus employees’ attention on security. Awareness presentations are intended to make individuals recognize information security concerns and respond accordingly.

Effective IT security awareness presentations must be designed with the understanding that people develop a tuning-out process known as acclimation. If the same method of providing information is continually used, no matter how stimulating it is, the recipient will selectively ignore the stimulus. Therefore, awareness presentations must be ongoing, creative, and motivational. Awareness presentations should focus employees’ attention so that the information provided will be incorporated into conscious decision-making. This process where an individual incorporates new experiences into existing behavior patterns is called assimilation.

Learning attained through a single awareness activity tends to be short-term, immediate, and specific. Repeated awareness activities spread over time improve assimilation. In other words, security awareness training performed once a year will not be assimilated into the existing behavior patterns of individuals. Information Security Officers must develop a program of ongoing security awareness in order to build an atmosphere of security.

In my next post, I will cover some steps that every organization must take to begin this process . . .