It seems like only a year ago that Chad Sakac and Christopher Hoff decided to have a dodgeball competition at the end of VMworld 2010. This year the battle is back and bigger than ever. Not only do they have more teams and a bigger venue, but this year they are playing for a good cause, and all proceeds will go to the Wounded Warrior Project.
The Wounded Warrior Project is a charity that helps veterans who were injured in the line of duty. Having never served myself, I can only imagine what these people went through on the battlefield, and I think we can all do our part in helping them off the battlefield.
If you want to attend the event, you can find more details here. Whether you are attending the event in person or not, I urge you to donate to the cause by clicking this link. While I myself will not be attending this year, I wish all teams the best of luck!
Whether you are configuring security for corporate compliance or you want a central repository to manage user access, LDAP integration is becoming a major part of corporate infrastructure. Many of you may not realize this, but the VNX (as well as the older Clariion and Celerra) supports LDAP integration, and after reading this blog post you will too. In this post I will cover the different steps (with pictures) required to set up LDAP authentication for VNX for FILE, BLOCK, and Unified.
*UPDATE* With the release of FILE OE 7.1 and BLOCK OE 5.32, all LDAP settings are now done in the Storage Domain section of Unisphere. Just follow the directions here to set up LDAP.
The name of an Active Directory group you want to give admin access to (no spaces please)
An existing administrator account on the VNX (and the root password for FILE)
Before we begin, you may want to log in to the Control Station CLI as root and run the following command: "/nas/sbin/cst_setup -reset". This command regenerates the Control Station lockbox fingerprint and is usually required on systems where the IP or name of the Control Station has changed. I find it's best to get this out of the way early instead of proceeding with configuration and finding it needs to be done later, since it does not change any settings outside the scope of this tutorial. More information on this can be found in Primus EMC260883.
Configuring LDAP on VNX for FILE
To start, we will need to log in with an administrator account such as nasadmin or sysadmin. Click on the "Settings" tab. On the right-hand side you will see a link to "Manage File LDAP Domain"; click it.
This section has several entries and is where we configure all the domain information. I have broken this down line by line as well as included a picture.
In this area you will put in the domain name. For this example, I used my domain “thulin.local”
This is where you put in the IP address of the first domain controller
This is where you put in the IP address of the second domain controller
Are you using SSL? If so, check the box. For this example I am not, because I don't have a certificate authority set up in the lab
389 for LDAP and 636 if you're using LDAPS
Directory Service Type:
Here you get three options (default, custom, and other). Default takes most of the guesswork out, but will only work if the service account and all the users and groups exist in the "Users" container. The custom option allows you to specify the exact container for the service account as well as the user and group search paths. Other is used for non-Active Directory setups (such as OpenLDAP servers). For this example we are using the custom option
User Id Attribute:
This is the attribute that represents a user in LDAP. In 99% of Active Directory environments it is "sAMAccountName", and we will leave it as that here
This is where you put the distinguished name of the service account. For this example I just used the administrator account
If this needs explaining then I have a nice etch-a-sketch you should be using instead of a VNX.
User Search Path:
This is where you specify the path to search for users who will be logging in. If the user is not inside this path, they will not be granted access. I like to search the whole domain because a user cannot exist in more than one spot, and authentication won't be affected by moving a user inside Active Directory
User Name Attribute:
This is the attribute to search by; we will use "cn" (aka Common Name)
Group Search Path:
This is just like above, but for groups instead. The same restrictions apply as well
Group Name Attribute:
Again we want to search by the common name
You want to search for the “group” class
We are searching for a “member” of a group
Once all the information has been populated, hit Apply to save it (if you run into an error here, see the cst_setup command mentioned earlier and start over). Once this is done we need to test things, so hit the Test button. If everything worked correctly it will say "Test Domain Settings. OK". If you get a "Bind Failed" error, either your IP, distinguished name, or password is incorrect. If you get a user or group error, check the search path and try again.
Now that we have configured our authentication protocol, we need to assign a privilege to an AD group. This is done in the user management area, so go back to the Settings tab, click on Security, then User Management, and finally "User Customization for File". This area presents you with three tabs: Users, Groups, and Roles. Click on Groups and then click Create at the bottom. You will now be presented with a screen to make a new group and map it to LDAP.
This is a local name for the group. You can call it whatever you want because it ONLY exists on the VNX FILE control station. I chose the name LDAP_Admins
This is where you can specify a GID or just have the system auto select one. I use the default of auto select
This is where you give permissions to the group based on the role. Any user in this group will be given this role/permission level by default. For this example, I chose to give the users the Administrator role.
This is where you would select “LDAP group mapped” and put in the name of the group (in this case serviceAdmins) and the domain name (thulin.local). The group name can’t have any spaces but does support underscores.
At this point all the work on the VNX FILE side is done and it’s time to start on the BLOCK side.
Configuring LDAP on VNX for BLOCK
Setting up LDAP for Block is very similar to the way it was done on the Clariion. Just like with the File side, you will need the same four bits of information. To begin, click on the Home button in the upper left, then click on the Domain tab, and finally click on "Manage LDAP Domain for Block". This brings up a window where we can start configuring our LDAP settings. The Block side requires you to set up individual domain controllers and configure all the settings on each one, so click on the "Add" button and we'll get started. You will see several areas to input information, and I will go through them:
This is where you put in the IP of the domain controller
389 for LDAP, 636 for LDAPS
There are two options: LDAP Server and Active Directory. Make sure to choose “Active Directory” if you’re using an AD environment (most of you will be doing this)
LDAP or LDAPS
This is where you put in the Distinguished Name of the service account just like when setting it up for file.
Password for the service account
Confirm Bind Password
Make sure it matches
User Search Path
Just like with File, this is where you would set the search scope to find your users
Group Search Path
Just like with File, this is where you set the search scope to find your groups
This is where you would upload a root CA certificate for LDAPS. Make sure it’s in base64 encoding
After you have put in all this information, click on the “Role Mapping” tab so we can map an AD group. Once in there you will want to select “Group” from the first pull down. Put in the name of the AD group (in this example I used “ServiceAdmins”), then select the Role from the second pull down (in this case I selected Administrator), and finally click “Add” to add the mapping. Once you have all your mappings, click ok and wait for the confirmation message. Then you want to do this all over again for the second domain controller. Once you have this all set, click “Synchronize”. And that is it!
Configuring LDAP on VNX for UNIFIED
Configuring LDAP for a Unified box is no different from the Block and File sides. The only thing you need to remember is that you need to do both, because authentication will check your LDAP account against both the Control Station and the storage processors. Both configurations will have to be working correctly to log in properly.
Now it is time to test your LDAP login. Log out of Unisphere by clicking the door icon in the upper right. Open Unisphere again and this time put in your AD username and password. Be sure to select "Use LDAP" and click on "Login". If all your configuration is correct, you will be brought back into Unisphere. If you get an access denied message, check your username and password, as well as your user and group search paths.
I have included a YouTube video published by EMC that shows exactly what I have demonstrated above.
I hope you enjoyed this tutorial and I hope this is the first of many. If you have any questions on what you’ve just seen, or if you have any suggestions for future write-ups, drop a message in the comments below.
As just about everyone on the internet knows, on July 20th Apple released OS X 10.7 (aka Lion) to the public. $30 gets you a boatload of new features. One of these features is a completely rewritten CIFS client. For those of you who don't know, CIFS is the protocol used for Windows file sharing and is a big part of the EMC Celerra / VNX product. We have identified an incompatibility within our code. The good news is that a fix is available for all DART code families (5.6, 6.0, and 7.0), and we are encouraging everyone to upgrade as soon as possible.
On July 14th, EMC released ETA emc263721 (Powerlink credentials required) to address this issue. An ETA (EMC Technical Advisory) is a way for EMC to proactively notify customers of issues such as this before they happen in their environment. It details the problem and states the current fix. For this issue, we have put the fix into the following code levels:
• 188.8.131.523 or higher
• 184.108.40.206 or higher
• 220.127.116.11 or higher
• 18.104.22.168 or higher
• 22.214.171.1241 or higher
You can figure out your code version by running the following command from the CLI: "server_version ALL" (without the quotes). If your current version is the same as or newer than the versions listed above, then no action is required on your part and you are fine to deploy OS X 10.7 in your environment. If your code is below these levels, I urge you to upgrade as soon as possible (especially if your environment contains a large number of Macintosh computers). To schedule an update, simply call EMC Support (800-782-4362), open a service request on Powerlink, or speak with your local field resources.
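The "same or newer" check means comparing the dotted version numbers field by field, not as text (for example, "7.0.13.0" is newer than "7.0.9.0" even though "13" sorts before "9" alphabetically). A small sketch of that comparison; the version strings here are placeholders, so check the ETA for the actual fix level for your code family:

```python
# Sketch: compare the DART version reported by "server_version ALL" against a
# minimum fixed level, field by field. The version strings used below are
# placeholders for illustration only -- take the real fix levels from the ETA.
def parse_version(v: str):
    """'7.0.13.0' -> (7, 0, 13, 0), so versions compare numerically, not textually."""
    return tuple(int(part) for part in v.split("."))

def needs_upgrade(current: str, minimum_fixed: str) -> bool:
    """True if the running code is older than the minimum fixed level."""
    return parse_version(current) < parse_version(minimum_fixed)

print(needs_upgrade("7.0.12.0", "7.0.13.0"))  # True  -> schedule an upgrade
print(needs_upgrade("7.0.14.2", "7.0.13.0"))  # False -> already at or above the fix
```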
With the recent announcement of the VCP5, the time to take the VCP4 is running out. On top of that, VMware is currently running a promotion that allows for a free retake if you schedule and take the exam in the month of July (promo codes “VCPTAKE1” and “VCPTAKE2”). This renewed sense of urgency has motivated me to get my certification now. I took the required course back in December, but without having a home lab until a few months ago, I barely had any exposure to VMware products. By taking the VCP4, I will be eligible to take the VCP5 without having to take a training course as long as I complete the exam by February of 2012.
The VCP4 exam consists of 85 questions that cover the changes from version 3 to version 4, as well as a basic understanding of ESX/i 4, vSphere 4, and the related plugins and features. The exam is scored on a scale from 100 to 500, and 300 is considered a passing score. That being said, it is my understanding that this exam is no walk in the park. It will test your understanding of exact minimums and maximums, what hardware can be used and how it works, and how the software is installed, configured, and used.
Preparing for the exam:
The only thing that VMware requires to take the exam is to take the certified training course. This provides the minimum amount of exposure that VMware feels is necessary for the certification. I took this class with my coworkers Mathew Brender and Tommy Trogden back in December of 2010. Now it is time to study for the exam. Besides the standard resources available on the VMware website, I picked up two books. I am using the "VCP VMware Certified Professional vSphere 4 Study Guide" by Robert Schmidt as well as the "VCP4 Exam Cram: VMware Certified Professional" by Elias Khnaser. Both of these resources come with very detailed overviews of all the topics covered on the exam, as well as a plethora of test-style questions designed to give you a taste of what to expect. However, I've found the questions in one book to be much easier than in the other, so I'm hoping the real questions fall somewhere in the middle.
I can combine this with my home lab to test things I've been reading about and to redo the labs from the training course. My home lab is more or less based on the Baby Dragon from Phil Jaenke; however, I only have one physical host at this time. Luckily, ESX/i can run virtualized, so I can create a few virtual hosts to test the more advanced vSphere features.
Final thoughts before the exam:
At this point I am 10 days away from walking into the testing center. I have completed most of my reading from the two books, I am reviewing test questions, and I am trying to reconfigure the lab to redo some of my old exercises. I am always looking for new practice test questions, and there seem to be plenty of them on the web (like the website of Simon Long). If you have any good links, please feel free to leave them in the comments, and look for me on Twitter after the exam to see how I did.
It has been less than a week since the Google+ Pilot launched and already people are getting hooked. Blog posts have popped up all over the place comparing this to Facebook, MySpace, and even Google wave. I will be doing none of that since I have never used any of those services.
If you are lucky enough to get into Google+ (and if you did, I hope you have me in your circle), you will notice the "less is more" style of layout. It's a very simple three-column approach. On the left you have your different circles (more on those later) and your chat. On the right you have contacts you know, suggested contacts, and the ability to start a hangout (again, more on these later). The middle is your stream, and depending on who you are following, it can get a little crazy at times.
Google+ breaks down your connections into circles. As far as I know, you can have as many circles as you want. This allows you to group your friends and coworkers into different sections and restrict your posts. Adding a person to a circle is a simple drag-and-drop action. With this all set up, I can post something only to my close friends or family without it going to everyone I work with. This provides a great amount of flexibility, and of course you can always post to all your circles or make something public for everyone to see. Like Twitter, adding someone to a circle does not require approval; however, if they don't have you in their circles, you won't see any of their non-public posts. Also like Twitter (or rather TweetDeck), tagging someone in a post is as simple as typing a '+' or '@' and then typing out their name.
Photos are handled by Picasa, which should come as no surprise since it’s owned by Google. Depending on how many photos you have in an album, Google+ will arrange them in a nice mosaic as seen in the picture to the side. Photos can be included into posts in your stream and other Google+ members can be tagged as well as leave comments. This all seems like very standard stuff, and it is, but Google’s presentation seems to be very slick and is appealing to me.
The hangout is a way for members of Google+ to communicate through video chat. Hangouts, just like posts, are controlled by circles, so you only invite those you want. The video is all done through Flash, and the quality isn't as good as Skype's yet. What it does bring to the table is a web-based video experience that allows multiple people to talk together in a group, a feature Skype makes you pay for.
The other major section is Sparks. This is a sort of themed subscription area. You can use it to find public posts related to any topic you search for. It will then create a stream filled with posts, stories/articles, YouTube videos, and other things that Google thinks are related to your inquiry. I haven't played around too much with this feature yet because there aren't a lot of public posts (this is expected in a limited field trial), but I expect this feature to be used a lot more in the future.
Google has also included an Android app. It allows you to see your stream, manage your circles, make posts, and upload your photos directly from your phone. A word of warning: when you first install the app, it's going to ask to auto-publish every photo you take; I suggest you say no. The app has the same simple UI as the webpage, though it is not as feature-rich. Clicking on someone's name in a post does nothing at this point, whereas in the web version you can find info about them. I would like to see an iPad app, as the mobile web interface is even more lacking.
With all these features, there are still a lot of kinks to be worked out. There seems to be no integration with anything non-Google at the moment. I would love to see a post-to-Twitter option, or a WordPress plugin to post to Google+. The task bar at the top launches a new window when you click on anything outside of Google+, and then another one to go back in. At one point I had seven Google+ windows open because of this. Hangouts do not handle widescreen cameras that well; the image of a friend was squished/stretched to match the standard aspect ratio.
So how does one get invited to Google+? It's simple: you need to know someone. On the first night, Google opened up invites to anyone a member chose. There was a simple button to include an email address. This disappeared within several hours, but a second method has been discovered. It is detailed in a blog post by Susan Beebe, and I have used it to invite several people. If you want me to try and invite you, leave your email address in the comments.
So, if you couldn’t tell, I like the service. I think it still has a way to go, but I’m told that Facebook and others started off small too. It has some great features and has the Google branding to help make it a great competitor. At this point only time will tell whether this is here to stay or goes the way of Wave.
*NOTE* This review was done on my own, and I was not compensated in any way by the manufacturer or any other parties involved with this product.
If you follow my Twitter stream, you know that I recently acquired an iPad 2 for a good price. I am really enjoying the device, and I am starting to look at it as a lightweight travel option instead of bringing my large 18” laptop with me on planes. With that goal in mind, I set out to find a case that would basically convert the iPad into a netbook.
There are several options out there, each with their pros and cons. Most of the units with a keyboard are north of $100 and seem rather bulky. It wasn’t until I had spent a week searching that a follower on Twitter mentioned the IMP38B and sent me a link to a YouTube video. At first glance, this case seemed to have everything I wanted, but you can judge for yourself below.
Ordering from New Trent was a pleasant experience. I had my doubts, since I had never heard of the website, but my order went through without a hassle. Since this was a preorder product, I received a personal email from the staff indicating when my item would ship, and they held to that date exactly.
The IMP38B is a hard-shell case with a built-in keyboard. It has a rubberized plastic finish on the outside as well as a reinforced swivel stand. This allows the iPad to easily tilt into both portrait and landscape modes. The hard-shell case protects the back from scratches and has cutouts for every button and connector, as well as the speaker and camera. The enclosed Bluetooth keyboard slides out to reveal 3 sets of grooves to lock the screen at different angles. Underneath the stand (and behind the keyboard) is a small compartment to store the charging cable for the keyboard (included) as well as an iPad USB cable (not included). When you are finished, the unit has a slide lock to hold it together, and a smart magnet will turn off the iPad when closed.
Pros:
• Case is solid and screen is locked in place and won’t fall out
• Sturdy stand that does not change angles when iPad is pressed hard
• Unit locks together for easy travel
• Keyboard is rechargeable
• Self-contained pocket for cables and stylus
• Keyboard has special keys for IOS functions
• Price! Only $50 shipped
Cons:
• Very hard to detach iPad when needed
• Keyboard keys are slightly too small
• Doesn’t fold flat for holding in hands
The keyboard takes a little getting used to, but this unit acts as a very good adjustable stand for watching movies or playing games. On top of all that, you can’t beat the price they are offering this case for as it’s half the price of its competitors. All in all I think it is a great case for travel and use at home and I urge everyone looking for an all-in-one solution to consider this as an option. Let me know if you have any questions in the comments section.
Nothing spells summer in MA like a freak tornado, and when we aren’t hiding in the basement, we are usually out grilling. Today I prepared my chicken satay (aka chicken on a stick) for Luigi and his family and friends. After enjoying a great Saturday afternoon cookout, I thought that perhaps other people would like to enjoy this dish as well. This is not your typical Thai chicken satay, but a slightly different version. As I understand it, this recipe was initially published in Times magazine in the early 1980s. My father used it every year for a cookout we had in the summer. It was such a hit with his coworkers that it was eventually cached away on the Realtime Software Engineering Group Notes server inside Digital Equipment Corporation (DEC). From there it was eventually passed on to me, and today I pass it on to you.
Wooden grilling skewers
4 Chicken breasts
2/3 cup of soy sauce (Low Sodium)
½ cup of sesame oil
¼ cup of brown sugar
The juice from 1 whole lemon
2 fresh garlic cloves (pressed)
1 tablespoon of ground coriander
Pepper to taste
Mix all of the ingredients (except the chicken and skewers) together in a mixing bowl to make your marinade. Take the chicken and pound it flat until it is only ¼ inch thick. Once flattened, cut the chicken into strips about ¾ of an inch wide (and as long as you like). Once all the chicken has been cut, combine it with the marinade for at least 6 hours. The longer you leave the chicken marinating, the stronger the flavor will be. I would also recommend submerging the wooden skewers in water for the same amount of time; the water-soaked wood keeps the skewers from burning away on the grill. Just before you are ready to grill, thread the chicken pieces onto the skewers, folding them back and forth as you go. At this point, just go ahead and grill it. A few minutes on each side should be enough to cook it all the way through.
CAVA is one of the few parts of the Celerra/VNX that cannot be configured and monitored from the GUI. Most, if not all, of the information you need about CAVA can be found in the command line. Over the course of a few posts, I will start with a fully working CAVA setup and then work backwards to break it, so you can see common implementation problems and possible performance bottlenecks. In this first post of the series, I will go line by line through the output of server_viruschk so that you can understand just what the output is saying. For reference, this is the output I will be working with:
[nasadmin@UberCS ~]$ server_viruschk server_2
server_2 :
10 threads started.
1 Checker IP Address(es):
 192.168.1.101 ONLINE at Thu May 26 19:41:13 2011 (GMT-00:00)
 MS-RPC over SMB, CAVA version: 126.96.36.199, ntStatus: SUCCESS
 AV Engine: Symantec AV
 Server Name: cava.thulin.local
 Last time signature updated: Tue May 17 05:55:23 2011 (GMT-00:00)
1 File Mask(s):
 *.*
5 Excluded File(s):
 ~$* >>>>>>>> *.PST *.TXT *.TMP
Share \\UBERCIFS\CHECK$.
RPC request timeout=25000 milliseconds.
RPC retry timeout=5000 milliseconds.
High water mark=200.
Low water mark=50.
Scan all virus checkers every 10 seconds.
When all virus checkers are offline: Shutdown Virus Checking.
Scan on read disable.
Panic handler registered for 65 chunks.
MS-RPC User: UBERCIFS$
MS-RPC ClientName: ubercifs.THULIN.LOCAL
I will now go line by line starting with the first one.
10 threads started.
This is the number of threads for CAVA. Each thread represents a file that can actively be scanned. CAVA will process up to 10 files at once, distributed across your available CAVA servers. Any additional files are put into a holding queue until CAVA can get to them. This limit exists so that we don't overwhelm the AV software running on each CAVA server. It is adjustable by the support lab if it is determined that this will solve a performance issue.
1 Checker IP Address(es):
This line tells you how many CAVA servers you have defined in your viruschecker.conf file. In this example, I only have one server defined, but you should be running at least two.
192.168.1.101 ONLINE at Thu May 26 19:41:13 2011 (GMT-00:00)
This line tells you the IP address of your CAVA server, as well as its status and the last time we checked it. If this line says anything other than ONLINE, there is a problem with the connection from the Windows server to the Celerra, and that server will not be used for scanning. More information on possible errors will be in a later post.
MS-RPC over SMB, CAVA version: 188.8.131.52, ntStatus: SUCCESS
This has three pieces of useful information. The first is the connection method we use to send commands to the CAVA agent; in this case, we are using the MS-RPC protocol. Older clients may use the ONC-RPC protocol, but that is not supported on 64-bit systems. The next part tells you the version of CAVA you are running; as of this writing, I am using the latest version (VNX Event Enabler 4.8.5). Finally, just as the line above reported the connection from Windows back to the Celerra, the ntStatus section reports the status of our initial connection to the Windows server.
AV Engine: Symantec AV
This tells you the AV software we detected for CAVA to use. This can be helpful if you have more than one AV engine installed on the client. In my case, I am using Symantec Endpoint.
Last time signature updated: Tue May 17 05:55:23 2011 (GMT-00:00)
This is the last time you updated your AV definitions
1 File Mask(s):
The number of file masks you have set to scan for. In this case, it’s just 1 mask.
These are the file masks you have in place. Any file that matches an entry here will be processed for scanning. In this case I have *.* (everything with a dot in it), but you can cut down a lot of traffic if you only scan for certain file types.
5 Excluded File(s):
This is how many file exclusion filters you have in place. In this case I have 5.
~$* >>>>>>>> *.PST *.TXT *.TMP
These are the file filters I have in place. There are a number of files that AV software just can't scan (like database files). I also have ~$* and >>>>>>>> in place to ignore Microsoft Office temporary files, as they can become locked temporarily while being scanned and cause a loss of data in the Office application.
This is the beginning of the UNC path that will be sent for file scan requests. It is determined from the CIFSserver line in viruschecker.conf and will change depending on whether you defined it with the IP, NetBIOS name, or FQDN. The CHECK$ folder is a hidden folder created just for CAVA; the only account that can access it is the one granted the virus-checking privilege.
RPC request timeout=25000 milliseconds.
This is the amount of time we will wait for a file to be scanned before trying again.
RPC retry timeout=5000 milliseconds.
This is the amount of time we wait for an acknowledgement of each RPC command.
High water mark=200.
I spoke before about how we process 10 files at a time and that additional files are put into a queue. The high watermark is the point at which we allocate additional resources to CAVA to process the queued files faster. Hitting this limit can cause a performance impact on your CIFS servers, so try not to let the queue get this bad. In my case, I have the default limit of 200.
Low water mark=50.
Just like the high watermark, this is a lower limit that indicates files are queuing up too fast. It won't cause a performance problem, but it is an indicator of a possible problem to come.
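The two watermarks described above can be thought of as a three-state gauge on the scan queue. A toy model (purely illustrative; the real thresholds live in the Data Mover, not in any script you run):

```python
# Toy model of the CAVA scan-queue watermarks, using the default thresholds
# from the output above (low=50, high=200). Illustrative only.
def queue_state(depth: int, low: int = 50, high: int = 200) -> str:
    if depth >= high:
        return "HIGH"   # extra resources allocated; CIFS performance may suffer
    if depth >= low:
        return "LOW"    # early warning: files are queuing faster than they scan
    return "OK"

for depth in (10, 75, 250):
    print(depth, queue_state(depth))   # 10 OK / 75 LOW / 250 HIGH
```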
Scan all virus checkers every 10 seconds.
Every 10 seconds we will check the status of each cava server to make sure it’s still online and ready to take requests.
When all virus checkers are offline: Shutdown Virus Checking.
This is the action we take when none of the CAVA servers are marked as ONLINE. This setting shuts down CAVA so that files don't continue to be queued and hit the high watermark. The other options are to do nothing (a setting of 'no') or to shut down CIFS (what I like to call paranoia mode).
Scan on read disable.
This means that scan-on-read is not enabled and we are only processing scan-on-write. If scan-on-read were enabled, the cutoff date and time would be listed here.
Panic handler registered for 65 chunks.
This is mostly debug information: how many internal failures CAVA would survive before causing a panic. Every process on the Celerra has a panic handler, and this information is of no use in basic CAVA troubleshooting.
MS-RPC User: UBERCIFS$
Earlier I talked about how we use the MS-RPC protocol to connect to the CAVA agent servers. This is the username we use for the SMB connection; in this case, we are using the compname of the CIFS server for CAVA.
MS-RPC ClientName: ubercifs.THULIN.LOCAL
This is the FQDN of the CIFS server we are using for CAVA, which is used as part of the MS-RPC process.
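If you find yourself checking this output across many Data Movers, a few of the fields discussed above can be pulled out with regular expressions. A rough sketch, tested only against the sample output in this post (other DART versions may format lines differently):

```python
# Sketch: extract a handful of fields from server_viruschk output with
# regular expressions. Written against the sample output in this post only;
# treat the patterns as assumptions, not a supported interface.
import re

def parse_viruschk(text: str) -> dict:
    info = {}
    m = re.search(r"(\d+) threads started", text)
    if m:
        info["threads"] = int(m.group(1))
    m = re.search(r"([\d.]+) (ONLINE|OFFLINE)", text)   # checker IP and status
    if m:
        info["checker_ip"], info["status"] = m.group(1), m.group(2)
    m = re.search(r"High water mark=(\d+)", text)
    if m:
        info["high_water"] = int(m.group(1))
    m = re.search(r"Low water mark=(\d+)", text)
    if m:
        info["low_water"] = int(m.group(1))
    return info

sample = ("10 threads started. 1 Checker IP Address(es): "
          "192.168.1.101 ONLINE at Thu May 26 19:41:13 2011 (GMT-00:00) "
          "High water mark=200. Low water mark=50.")
print(parse_viruschk(sample))
```

Anything other than "ONLINE" in the status field is your cue to dig into the later posts in this series.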
This concludes my line-by-line explanation of the CAVA output. I hope you understand it a bit better. In future posts on CAVA I will talk about some of the different information you might see when there is an error, as well as the output of the -audit option. Please feel free to ask questions in the comments section below.
Johnny Depp and Disney are back again in a fourth attempt to make money off an amusement park attraction. This latest installment is not quite the same as the other three, since it does not have Will Turner (Orlando Bloom) or Elizabeth Swann (Keira Knightley). Instead we find Captain Jack Sparrow, Captain Hector Barbossa, and Captain Blackbeard all searching for the Fountain of Youth. The addition of Penélope Cruz worked nicely, as her role made for many a laugh as well as an integral part of the story. Along their travels they encounter the British and Spanish navies, zombies, and mermaids.
In my case, I went to see the movie in IMAX 3D. IMAX movies have always been good to me; the large screen and high-powered sound system, coupled with Tempur-Pedic seats, make the experience enjoyable. However, the 3D in this movie seems like it was more of an afterthought. Time after time it seemed like flat images were just moved forward, instead of being filmed properly like the effects in Avatar. This left me struggling to grasp the sense of realism that came with the amazing set designs and special effects.
Speaking of special effects, they were excellent. I really thought that the way the mermaids were done made them seem very realistic for fantasy creatures. The fight sequences were well choreographed and seemed to sync perfectly with the environmental elements that were part of the surroundings.
All in all, the movie was good. It might not have much of a plot, but it’s full of the classic one-liners and expansive special effects that I have come to expect from the other movies in the series. It’s no wonder this movie grossed more than $350 million worldwide in its opening weekend, and I expect it to do even more over the coming weeks, with the possibility of a fifth movie in the series.
So EMC World 2011 has come and gone. Now is the time that we can look back and remember. For those of you who were unable to make it live (or watch the webcast), a video of our #nerdherd has been posted on the EMC Community Network website. I want to thank Alan Zenreich for filming and posting the video. If you are one of the many people who prefer a more static image, EMC’s own David Elmes did most of the photography (including the photos of our meetup you see below). To see the rest of his work and others’, check out the EMC World 2011 Flickr stream. Once again, thanks to everyone that made this happen, and enjoy the pictures and video.
The opinions expressed here are my personal opinions. Content published here may not have been read or approved in advance by my employer and does not necessarily reflect the views and opinions of my employer. This is my blog, it is not a corporate blog.