Our results lead to three conclusions: First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs. This third conclusion leads us to develop flash translation layer extensions that exploit the details of flash memory’s behavior to efficiently support file sanitization. Overall, we find that reliable SSD sanitization requires built-in, verifiable sanitize operations.
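As a concrete illustration of the second conclusion, here is a minimal sketch of a two-pass overwrite in Python. It operates on an ordinary file; the block size and the sizing logic are assumptions for illustration (a real run would target a raw device), and per the conclusions above, even two full passes do not guarantee sanitization on every SSD.

```python
import os

def overwrite_twice(path, block_size=1024 * 1024):
    """Overwrite every visible byte of `path` twice with random data.

    Illustrative sketch only: in practice `path` would be a raw device
    (e.g. /dev/sdX), and, as the conclusions above note, two full passes
    over the visible address space are not guaranteed to sanitize every
    SSD, since the drive may keep stale copies in hidden flash blocks.
    """
    size = os.path.getsize(path)  # for a raw device, query its capacity instead
    for _ in range(2):  # two full passes over the visible address space
        with open(path, "r+b") as f:
            remaining = size
            while remaining > 0:
                n = min(block_size, remaining)
                f.write(os.urandom(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
```

Random data is used here instead of zeros only so that each pass is distinguishable from the last; the choice of pattern is not what makes overwriting insufficient on SSDs.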
We’ve been covering SandForce, creator of smart SSD processors that extend the life of flash storage by spreading writes more evenly across it, boosting performance and reliability along the way, for over a year now. This, according to the company, makes the drives reliable enough for enterprise use, and IBM has added its vote of support, configuring a 9189 Power 780 server with 56 177GB SSDs (10.5TB in all) sitting behind SandForce’s SF-1500 processor. When running the TPC-C benchmark, that combination delivered 150,000 transactions per minute per CPU core: 50 percent higher (per core) than other entries in the TPC-C benchmark, and considerably cheaper, too. IBM’s configuration is set to be available around October of this year, perhaps ushering in a new era of the platter-free enterprise.
The first session was an excellent keynote by Mrs. Pamela Jones Harbour, Commissioner at the US Federal Trade Commission. She “asked the tough questions” and pointed to some “storm clouds”.
The first «storm cloud» she talked about was the asymmetry between users and companies: consumers may not understand when they are using cloud computing, and it is hard for them to delimit what data they are willing to share. On the supply side, providers do not offer consumers minimum choices: they present «incomprehensible privacy clauses», they don’t «adequately disclose the scope», and they hide behind «click-wrapped agreements».
The second «storm cloud» was (in)security. Cloud services are potentially insecure, and there is an opportunity for providers to avoid responsibility and accountability.
The third «storm cloud» was competition. There is a wide range of choices, and if consumers do not demand accurate information and an adequate level of security as part of the competitive process, government may have to intervene in the market. Turbulent times are pushing companies toward low cost, so the market forces them to lower their best practices.
The fourth «storm cloud» was incompatible jurisdictions. The state of the law in the USA is uncertain, and there is some lobbying for federal legislation on cloud computing. There is a need to identify challenges and develop good practices. In any case, rules have to be process-oriented, not technology-oriented, and not tied to specific technology requirements.
Her final message: ask the tough questions, but don’t fear the challenge of the cloud.
Most free tools used for computer forensics run on UN*X, and most forensics distributions are based on Linux. At first they were based on Knoppix, and later they started to use Ubuntu as a base. In the change we lost the ability to load the OS into RAM. Now you need to hack it a bit to boot into RAM, but I’ll talk about that some other day…
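The mechanism in both families is a boot parameter. A hedged sketch of a GRUB entry for a Casper-based (Ubuntu-derived) live image, where the paths are illustrative assumptions and only the `toram` parameter matters:

```
# Knoppix honors the 'toram' cheatcode; Casper-based images (Ubuntu and
# derivatives) accept a 'toram' boot parameter as well, though it has not
# always worked without tweaks.  Kernel and initrd paths are illustrative.
linux  /casper/vmlinuz boot=casper toram quiet splash
initrd /casper/initrd.gz
```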
The fact is that sometimes I miss having a persistent UN*X installation.
I’ve always loved the BSD flavors, partly because I’ve had good experiences with them. In 2004 we had to set up video and multichannel audio transmission between Montreal and Barcelona in the context of Artfutura 2004. Need a firewall and traffic prioritization that minimizes lag without wasting the precious 100Mbps connection we got? OpenBSD + PF did the trick.
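That kind of setup can be sketched in pf.conf using ALTQ queueing. Everything below (interface name, bandwidth split, port range) is an illustrative assumption, not the actual 2004 configuration:

```
# Hypothetical pf.conf sketch: prioritize media traffic on a 100Mbps link.
# Interface, queue split and port range are assumptions.
ext_if = "fxp0"

# Reserve most of the link for the media streams, leave the rest for everything else.
altq on $ext_if cbq bandwidth 100Mb queue { q_media, q_default }
queue q_media   bandwidth 85Mb priority 7
queue q_default bandwidth 15Mb priority 1 cbq(default)

# Send the (hypothetical) UDP media port range through the fast queue.
pass out on $ext_if proto udp from any to any port 5000:5100 keep state queue q_media
pass out on $ext_if keep state queue q_default
```

The design point is that queueing only shapes outbound traffic per interface, so prioritization like this has to be configured on the side that transmits each stream.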
And I’ve had a long relationship with Sun operating systems since my college years, first with SunOS and later with Solaris. (You may not believe me, but once I was shutting down a SunOS 4.1.x SPARCstation with «shutdown -g 0» and I got a message like «does it have to be now?» before the screen went black. An Easter egg, I guess…)
Time and again I work on cases where evidence is compromised because there were no minimum auditing policies in place.
Microsoft Windows Server 2003 (the most common environment nowadays) does not consider this necessary, as it does not enable these settings by default; but if you ever have an incident, you will wish you had some auditing policies set.
If you are running a typical Microsoft-based network (an Active Directory domain plus Exchange), then based on my experience I consider the following settings the bare minimum you should configure to have at least some knowledge, and proof, in case of misuse:
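Whatever concrete list you end up with, such policies can be applied centrally through a Group Policy Object or locally from the command line. A minimal sketch, with illustrative categories only (the auditpol syntax below is the Windows Server 2008+ one; on Server 2003 you would set the equivalent policies under Local Security Policy or a GPO):

```
rem Illustrative only: enable success and failure auditing for a few
rem key categories (Windows Server 2008+ auditpol syntax).
auditpol /set /category:"Account Logon"      /success:enable /failure:enable
auditpol /set /category:"Logon/Logoff"       /success:enable /failure:enable
auditpol /set /category:"Account Management" /success:enable /failure:enable
auditpol /set /category:"Policy Change"      /success:enable /failure:enable
```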
With the deadline approaching, security professionals are trying to find weapons to fight Conficker.
It seems that Dan Kaminsky (yes, that Dan Kaminsky) teamed up with researchers Felix Leder and Tillmann Werner of cs.uni-bonn.de to work on a tool that will be very useful if effective: a network scanner that can remotely detect infected computers by the way they answer specific network requests.
You can find the scanner here and read Kaminsky’s post here.
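The underlying idea is plain remote fingerprinting: send a crafted request and classify the host by its answer. A hedged Python sketch of that idea follows; the payload and the matching logic are placeholders, not the real Conficker fingerprint (which involves a crafted SMB/RPC request on port 445 answered differently by infected hosts):

```python
import socket

def probe(host, port, payload, timeout=3.0):
    """Send a probe payload and return the (possibly empty) response.

    Generic sketch of remote fingerprinting: the scanner described above
    works because infected hosts answer certain crafted requests
    differently from clean ones.  The payload here is a placeholder.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        s.settimeout(timeout)
        try:
            return s.recv(1024)
        except socket.timeout:
            return b""

def looks_infected(response, fingerprint):
    """Classify a host by comparing its answer to a known 'infected' answer."""
    return response == fingerprint
```

In the real tool the probe and fingerprint encode a protocol-level anomaly introduced by the worm, which is why it can scan remotely without credentials on the target.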