Happy Easter from the NSA to all Windows Server Admins

NSA logo

We raised the first alarm yesterday afternoon on social media, but it's official now. The first attack kits are out and the attack surface is huge.

Pardon my French, but it's appropriate this time: As of last night, every company that still uses Windows Server 2003 somewhere is 100% fucked. Not just now, but forever. And that's basically every larger enterprise I know. Newer Windows Server versions up to 2012 must consider their internal networks breached. The same applies to Windows clients still running Windows 7 or even Windows XP.

Linux servers that still use the SMB1 protocol for network shares may be affected as well.

And that'll be only the tip of the iceberg, it seems. As a very first action, block port 445 in your firewalls and update your Microsoft infrastructure to the newest available versions. Yes, this includes client updates to Windows 10.
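As a quick triage step, you can check from the outside which of your hosts still accept connections on the SMB port. The following is a minimal sketch using only the Python standard library; the host addresses are placeholders, substitute your own ranges. Note that an open port 445 does not prove SMB1 is enabled, but a refused connection does confirm your firewall rule is working.

```python
import socket

def smb_port_open(host, port=445, timeout=2.0):
    """Return True if the host accepts TCP connections on the SMB port (445)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder addresses from the TEST-NET range; use your own networks here.
    for host in ["192.0.2.10", "192.0.2.11"]:
        if smb_port_open(host):
            print(f"WARNING: {host} exposes port 445, block it at the firewall")
```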
As a second step, re-work your entire internal infrastructure, since you must assume you are already compromised. This is not about future attacks; it's about the present.

Yes, this means you might actually have to do something many decision makers have always neglected, against good advice from all your suppliers. Those managers and internal IT department leaders are now receiving their lesson for being slow, understaffed and under-budgeted. Best prepare to adjust your business model as well.

And don't forget your cloud containers and virtual machines.

Further information about this particular issue can be found at Microsoft's TechNet, where a warning was already published half a year ago: Stop Using SMB1


EvilEye spyware hijacks users' webcams


We have discovered a trojan in the wild that hijacks cameras connected to a victim's computer to analyse product and brand logos in the camera's field of view. The victim will not be aware that this is happening, because the trojan disables the LEDs that would normally indicate the webcam is in operation. The victim's browser is then manipulated in such a way that a hidden adblock component gets installed. This enables the trojan to replace advertisement banners with its own ads, based on the user's apparent preferences. The cyber criminals operating the scam make money off affiliate fees tied to the replacement ad banners.

The trojan uploads images from the cameras to AWS cloud servers operated by the cyber criminals. These servers analyse the footage for known logos, presumably using sophisticated neural networks and machine learning. The server then sends relevant ad banners and embeddable JavaScript back to the malware instance running on the victim's computer. Legitimate ads are thus replaced with ads benefiting the scammers.


Several versions of the EvilEye malware are currently circulating in the wild. We have contacted a number of anti-virus vendors and supplied them with the malware samples we discovered during our research. Current anti-virus software should already protect you against the current threat. If the malware evolves, as it has done several times during our analysis, this might change. We recommend keeping your anti-virus suite up to date at all times. The same goes for updates to your operating system and other software, of course.


EvilEye was discovered by independent security researchers Marcus Cole and Joe Miller of C-Sec Security in conjunction with a major web advertising firm that preferred not to be named in the report. A paper describing the technical details of the malvertising campaign is being prepared and will be available shortly. If you are an advertiser and were affected by EvilEye, you can contact the authors by emailing security@evileye-spyware.com – but please refrain from contacting us for technical or press inquiries, since we are a very small team and already very busy.


As has been noted by other researchers in the past, a security vulnerability tends to get overlooked these days if it isn't accompanied by a website, a great name and a nice looking logo. We don't like it either, but unfortunately, such is the world we all live in now.

Source: evileye-spyware.com

Ad blockers are self-defence


That is now also what the BSI is indirectly saying in a statement published today regarding the alleged 2015 cyber attack on the German Bundestag, which in reality was no such thing.

Granted, given the scandalous state of the Bundestag's network and server infrastructure at the time, a supposed cyber attack from Russia naturally sounds like a much better excuse than "sorry, we suck". But that it was merely a simple drive-by is still quite staggering.

A drive-by means that someone visits a normal website which embeds ad banners or other content from a third-party source without any vetting. This usually happens over unsecured HTTP, because linking an unsecured source from another domain triggers a security warning in web browsers: the website operator's certificate is only valid for their own domain, not for the domain of the ad agency from which the infected banners are served. And it is no wonder that these banners are often infected; after all, encryption is deliberately switched off precisely to bypass this browser security check.
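To see how easily such unvetted plain-HTTP embeds can be spotted, here is a minimal sketch using only Python's standard library; the domain names are hypothetical examples, not real ad networks:

```python
from html.parser import HTMLParser

class InsecureResourceFinder(HTMLParser):
    """Collects every src/href attribute pointing at a plain-HTTP resource."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append((tag, value))

def find_insecure_resources(html):
    """Return a list of (tag, url) pairs loaded over unencrypted HTTP."""
    finder = InsecureResourceFinder()
    finder.feed(html)
    return finder.insecure

# Hypothetical page: the banner iframe is served over unencrypted HTTP.
page = '<img src="https://site.example/logo.png"><iframe src="http://ads.example/banner.html"></iframe>'
print(find_insecure_resources(page))
```

A proxy-level ad blocker does essentially this kind of filtering for every page your users open, before the banner ever reaches the browser.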

Had the Bundestag run an ad blocker through a proxy back then, or even just advised people to use ad blockers, the whole mess would never have happened. But that was already politically impossible: German publishers were lobbying massively against ad blockers at the very same time.

The nice thing is that the BSI now says so itself in its statement. Quote: "One of the main causes of these so-called drive-by attacks are malicious ad banners. These are provided by unknown third parties or marketed by agencies and are frequently embedded into a website without any review or quality control. In this way, even popular and otherwise well-secured websites become launching points for cyber attacks."

How can a problem like this be solved?

Best by putting the publishers on a leash. For print ads they are obliged to review the advertisements; for websites they are not. That has to change.

We also need a law making providers of software or media liable for the damage they cause, whether intentionally or negligently. You think such a law has long existed? You are right, it actually exists almost everywhere, just not in the software industry or ICT.

Make publishers, website operators, software manufacturers and Internet-of-Things vendors liable for their security holes, and cybersecurity will suddenly make rapid progress. A software producer must be just as liable for security problems as a car manufacturer is for defective airbags. The era in which risks are shifted onto consumers for the benefit of industry and publishers must finally come to an end.

Link to the BSI statement: click here

Another severe Linux security flaw went unnoticed for seven years

Linux Security Flaw

This time, the affected component is the High-Level Data Link Control (HDLC) kernel module.
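If you want to check whether a machine even has the vulnerable module loaded, /proc/modules can be inspected directly. A minimal Python sketch (the helper functions are my own illustration, not part of any official tool):

```python
def loaded_modules(proc_modules_text):
    """Parse the text of /proc/modules into a set of loaded module names."""
    return {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}

def hdlc_loaded():
    """Return True if the hdlc module or one of its variants is loaded."""
    try:
        with open("/proc/modules") as f:
            mods = loaded_modules(f.read())
    except FileNotFoundError:  # not running on Linux
        return False
    return any(m == "hdlc" or m.startswith("hdlc_") for m in mods)

if __name__ == "__main__":
    print("hdlc loaded:", hdlc_loaded())
```

If the module turns out to be loaded but unused, blacklisting it until you run a patched kernel reduces the attack surface in the meantime.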

It is the fourth severe Linux bug within the last 13 months, after the unbelievable glibc disaster (unnoticed for 8 years), Dirty Cow (9 years) and the kernel code-execution bug which allowed low-privilege processes to gain full root access (11 years without a fix). Each one of them had the potential to take down a huge company like Microsoft, had it happened to them.
And really, it's not funny anymore. Never has been in the first place.

Do you still believe the myth that open source is a key ingredient of secure systems and services, and that closed source is the devil? I'm sorry, but you have been misled. This has rarely been more than an excuse for a certain type of employer to replace licence costs with cheaper manpower, and over the last year it has become pretty obvious that open source does not increase security at all. The reason is simple: nobody audits the fundamental codebases of the kernel and supplemental software components, because that would be too expensive. So such bugs go unnoticed by the community, while those who mine the kernel code for zero-day exploits have zero interest in sharing their insights with anyone who doesn't pay large sums for the disclosure.

Actually, the opposite awareness is beginning to show its ugly face: open source is increasingly becoming a security problem. Amazon Web Services, for example, is not the (by far) most compromised cloud service in the world merely because it is big. The more sobering reality is that standard open source applications running on standard kernels and cheap Intel hardware have become one of the most relevant attack surfaces, due to code profiling tools like Valgrind and non-isolated container environments like Docker. Big attacks like the PlayStation Network hack (Amazon), which caused tens of millions of dollars in damage, would not have been possible without the use of open source on the server.

Even the kernel development teams have realized this, and switched from a disclosure policy to a non-disclosure approach, keeping any exploit they hear about "secret" until it is fixed. Except Google, of course, but Google usually only acts as an advocate for free security information as long as it concerns security issues of their main competitors in the cloud or phone market.

Long story short: If you are in the security business (or any related one), please don't claim that Linux is more secure than anything else, especially if you don't have the funds to perform serious in-depth audits of every component you use, from the kernel up to the application layer and OpenSSL. There's a reason why companies like IBM, Oracle or Microsoft employ hundreds of experts and academics who focus on nothing else. And it has an effect that distributors like Red Hat, Canonical and SUSE don't.

There's also a reason why, for example, financial, military and other highly critical environments still use so-called "end-of-life" platforms like Red Hat Enterprise Linux 3: simply because an audit costs tens of millions. And don't forget that every involved supplier, from the developer to the Linux software distributor, strictly rejects any liability, or even Service Level Agreements for cases where an urgent hotfix is needed within a fixed MTTR.

Be smart, not religious. Use any technology you want, but use it where it is appropriate, and be aware that the times when attacks and defence happened mostly on the application layer are over. This will significantly decrease your risk of being put out of business.

You can find the fixed HDLC kernel module code here: git.kernel.org. Have fun compiling your images, again.

US Government pays to keep security holes open


So the cat is publicly out of the bag now, after WikiLeaks published the CIA documents. They contain proof that the CIA and NSA (U.S. government agencies, keep that in mind) pay software manufacturers NOT to fix exploits and to keep them open. We also learned from those documents that they successfully compromised the encryption of services like WhatsApp.

This problem is massively bigger than it appears when you think about it for a second. Maybe now you begin to understand why zero-day exploits have become a real market for shady security companies. The Cisco exploits, for example, which were zero days and went unfixed for three years, raised some eyebrows last year, but that was it; the public did not seem to care enough. But if you are a little more proficient in this area, you knew this was neither a coincidence nor an isolated case. Non-disclosure of zero-day exploits is the standard procedure, simply because you can earn millions with a good one, especially in attack surface scanning tools which probe huge networks with millions of IPs for unpatched software and similar weaknesses.

And now think about this once more. Where did the CIA get the exploit from? How top secret is such an exploit, really? After all, even the business transaction for keeping an exploit alive has been documented, and it is now official that the U.S. government does not rely solely on its own agencies. Which means this information is also available outside the agency itself.

Keeping an exploit like this deliberately open is selfish, reckless and negligent to the highest degree. It turns an exploit into a security weakness which can (and in too many cases will) be exploited by third parties. Most of these weaknesses target hardware implementations (the notoriously insecure Ivy Bridge architecture that Intel produces is no longer a secret known only to insiders) and, of course, encryption implementations in standard components like OpenSSL.

Don't rely on standard security procedures and algorithms alone. SSL, PGP, PKI: all those standards are basic tools which should be used in basic setups, that's true. But that's all they are: basic. And basic is not enough when you have sensitive customer data, intellectual property or similarly critical information to protect. Those tools reduce your enterprise's attack surface, but not as much as you might think, or even need.

For example, if you have sensitive data behind a web frontend or an application server, but do not encrypt your databases and filesystems with separate certificates, or maybe do not encrypt them at all: do you really think your business is safe?
If your firewall or IDS runs on a system without advanced internal security mechanisms, or even on standard hardware with USB ports: do you really think it will do its job all the time?
Do you still have unencrypted data flows in your network?
Do you still work with IPv4 because a business-critical application doesn't support IPv6?
Do you not use IPv6 because it might turn your network into a notwork?
Do you store information at an external cloud provider like Amazon, which doesn't implement proper security and mandatory access controls on all layers of its architecture?

You shouldn't, definitely not. IT security (aka cybersecurity) is basically risk management, and as long as you can measure your attack surface and still convert it into money, the attack surface is still too visible and you're in trouble.
Security is not something which costs money.
Security is not an insurance.
Security is not an option.
Security is the air your business needs to breathe. Take it away, and your business model might run into fatal problems.

So do not rely on standard encryption methods alone. They only make attacks more expensive, and you don't know how big the impact on the attacker's resources will be.
They may render attacking your enterprise and your information too expensive, yes. But it's also possible that they won't.
The attacker will know. You won't.

Instead, tighten your infrastructure, your server installations, your hardware. Don't use Intel architectures for really critical data. Don't use ssh-agent. Don't use sudo. Don't use filesystems that are not encrypted with certificates. Don't rely on standard UNIX permissions alone. Don't rely on hardening alone, especially if you forget to start over with it after applying a patch. Don't deploy identical security-related settings or even certificates with config management tools like Puppet; every server must be secured individually.

Instead audit your servers, on-the-fly and constantly during normal operations.

Classify your information properly and ensure that only certified users, processes, storage, I/O operations, connections and devices can access it, by applying Mandatory Access Controls. No, SELinux is not enough; it only addresses users and processes.
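For illustration, the classic mandatory access control rules (Bell-LaPadula: "no read up, no write down") can be sketched in a few lines. The classification labels below are hypothetical examples, not any product's API:

```python
# Classification levels, lowest to highest (labels are illustrative only).
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read(subject, obj):
    """Simple security property: a subject may never read above its own level."""
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):
    """Star property: a subject may never write below its own level,
    which prevents leaking high-classification data into low containers."""
    return LEVELS[subject] <= LEVELS[obj]

print(can_read("internal", "secret"))   # no read up
print(can_write("secret", "internal"))  # no write down
```

Real MAC frameworks enforce checks of this kind in the kernel for every I/O operation, connection and device, which is exactly why a userspace bolt-on is not sufficient.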

Don't even think about using Docker for applications which work with sensitive data. What you want is full container isolation at the operating system and hardware level.

Don't use software or hardware which utilizes CPU caches for encryption in multihomed or even cloud environments.

And if your operating system supplier, your hardware vendor or your cloud service provider do not support this, then don't use them to store any data that must remain confidential or even secret. It's as simple as that.
There is only one alternative for highly classified information: keep it as far away from networks as possible.

Don't risk losing control over your classified data and being taken out of business. 

Source: iOS Exploits Data

Phones are potential CCTV cameras

Last night, CNN published an article describing a scene which will hopefully never take place like this at your offices or homes: the U.S. president, his aides and Mr. Abe from Japan sitting in a candlelit room over classified documents which, directly or indirectly, describe U.S. surveillance capabilities in the Chinese Sea, pointing their phones' camera lights at the papers to provide enough light to read them.

What could possibly go wrong?

The answer to this question: Everything. XKeyscore's existence has been proven, and there are other governments out there with similar technology, especially in Eastern Asia.

Don't try this at home, or at your offices. Always make sure that classified information stays confidential.

I have seen companies accidentally broadcast internal information on television, just because they forgot to clean their whiteboards before letting a camera team do their filming.
Clean up your desks when you leave the office.
Lock your PC's screen when you leave your desks.

You would never leave your car keys in your unattended car while parking on the street, going shopping or just paying inside the gas station, right? So don't do it with your workstation and your paperwork either. You can always replace a stolen car, but not your career, or the future of your co-workers. And keep your smartphones away from any confidential information, the entire phone, not just the camera. This also has the benefit that no ringing phone will disturb your meetings and conferences.

"The patio was lit only with candles and moonlight, so aides used the camera lights on their phones to help the stone-faced Trump and Abe read through the documents." (CNN)



Privacy Shield Framework

Just in case you are still somehow convinced that the Privacy Shield agreement between the European Union and the United States protects the data of European citizens in any way, the time for a reality check has come.
U.S. Magistrate Judge Thomas Rueter in Philadelphia ruled on Friday that Google must transfer emails from a foreign server to a server located within the United States, so that FBI agents can review them locally as part of a domestic fraud probe.

This does not exactly increase trust in the integrity and/or competence (you choose) of European politicians, does it? So remember: only store data in a cloud, or transmit it through the internet, when it is completely encrypted using PKI and only you own the private key. For us at ICT.technology, this is a mandatory procedure for all our customers and their projects.

Read more at Reuters http://www.reuters.com/article/us-google-usa-warrant-idUSKBN15J0ON

Oracle Solaris moves towards Continuous Delivery Model

Oracle SPARC & Solaris roadmap

The Oracle SPARC and Solaris roadmap has now been officially released (not just to partners), quite contrary to the recent FUD spread on the Internet at the end of 2016.

Oracle also announced premier support for Solaris 11 until 2031, and extended support until 2034.

Solaris moves to a Continuous Delivery Model, basically similar to Microsoft's strategy: updates will be released more often as dot releases, instead of rather disruptive major releases.

Oracle Cloud has been moved entirely to the SPARC M7 platform, running on SPARC Model 300 servers.

Read the full details here: https://blogs.oracle.com/solaris/entry/oracle_solaris_moving_to_a

DHS memorandum header

This article might be nearly two years old, but it has never been more relevant than now. I strongly urge everyone, especially my customers and family, to always encrypt their PCs with a certificate. Yes, a certificate, not a BIOS or bootloader password; those are useless and easy to bypass. This should be mandatory for any business device, especially in times when BYOD is hyped. And don't forget that a certificate is equally useless if you carry it together with the device, or use an empty or easy-to-guess passphrase. Always keep the USB stick with the certificate separate from your own baggage or hand luggage.

A forensic copy of a hard disk (not an SSD) takes about an hour; an SSD is a matter of minutes. So if they ask you for your laptop and take it out of your sight, you must treat it as compromised.

Read more details on Vice.com
