You may have noticed that a lot of vendors around the IT community helped recognize the ~767 people awarded VMware vExpert in 2014. This blog post is a shout-out to those vendors giving something to the individuals chosen as vExpert. In scouting around the internet I found these juicy nuggets, and I will keep the blog updated when I find others. If you know of one not listed, please send a message to @thevbox on Twitter and I will investigate and add it if legitimate. (Yes, I will check it.) I hope this helps.
Pluralsight’s special gift to vExperts is one year of free, unlimited dev and IT training. That’s 1,400+ courses on the latest technologies including topics ranging from virtualization and security, to SQL Server and .NET. There are hundreds of topics to choose from in our library, and you’ll get access to all of it.
Login Virtual Session Indexer (Login VSI) is the industry standard load testing tool for virtualized desktop environments. Login VSI can be used to test the performance and scalability of VMware Horizon View, Citrix XenDesktop and XenApp, Microsoft Remote Desktop Services (Terminal Services) or any other Windows based virtual desktop solution.
StarWind offers StarWind V8 for Hyper-V and VMware that you can use for your home or work lab! StarWind V8 delivers a high-performance, fault-tolerant SAN at a fraction of the cost and complexity associated with traditional SAN-based storage infrastructure. It includes three stand-alone products that share a common engine, with efficiency proven by eight generations of software and hundreds of loyal users.
DataCore SANsymphony-V is a software solution that runs on any x86-64 server. It sits in the data path between application servers and any make or model of storage hardware. SANsymphony-V operates in a two-node configuration, leaving no single point of failure.
Veeam is giving away FREE 180-day NFR licenses for 2 sockets of Veeam® Backup Management Suite™ for VMware or Hyper-V for your home or work lab.
Veeam Backup Management Suite is built from Veeam Backup & Replication™ to deliver the data protection, visibility and control you need for your VMware or Hyper-V environment.
SolarWinds offers SolarWinds® Virtualization Manager software. SolarWinds Virtualization Manager delivers integrated VMware and Microsoft Hyper-V capacity planning, performance monitoring, VM sprawl control, configuration management, and chargeback automation; all in one awesomely affordable product that’s easy to download, deploy, and use.
Symantec offers NFR license of Symantec Backup Exec V-Ray Edition for vExperts.
Symantec Backup Exec 2012 V-Ray Edition is designed to completely protect your VMware or Hyper-V virtual environments. With advanced host-based virtual protection and integrated deduplication, the V-Ray Edition gives you worry-free backup and recovery for virtual machines and virtualized applications. And Backup Exec gives you granular recovery of entire virtual machines, single files, Active Directory objects, Exchange emails, or SharePoint documents from a single-pass, host-level backup.
Tintri, the pioneer of smart, VM-aware, application-oriented storage for virtual workloads, has offered extremely nice Nike polo shirts with your Twitter name on the front for several years now. For each year that you have been a vExpert you get an embroidered star. This year’s shirt has an unrevealed surprise as well. The blog post announcing it is here.
PernixData makes FVP, hypervisor software that aggregates server-side flash across the enterprise into a scale-out data tier, accelerating primary storage by leveraging flash within the host. PernixData is offering free NFR licenses for FVP to 2014 vExperts.
If you are a VCP, VCI, vExpert, MVP, or MCP, you can protect 2 sockets and 2 physical servers with the Unitrends backup solution. Note that you also get phone, e-mail and online support with a 1-year expiration (renewable along with your certification).
VSS Labs is releasing a new product, called vCert Manager, which will vastly improve the SSL certificate management experience for VMware customers. VSS Labs has a free NFR license available for home lab usage, with a special bonus for current fellow VMware vExperts. The tool will discover all of your components, even if they exceed your license, but you can only actively manage the number up to your licensed threshold (2 vCenter, 10 hosts). For more info see this post.
Devolutions is offering vExperts a free license of their flagship remote desktop management solution. Remote Desktop Manager lets you centralize, organize and securely share all your remote connections, credentials, passwords and documents throughout your entire team. Learn more about Remote Desktop Manager on SlideShare.
If you are interested and would like to obtain the free license, please send an email to email@example.com along with a link to your VMTN profile. Devolutions will not add you to any email lists or databases, or share your information.
Royal TS provides easy and secure access to your remote systems. Unlock the power to remotely manage your systems on multiple platforms! Royal TS is a tool for server admins, system engineers, developers and IT-focused information workers who constantly need to access remote systems with different protocols, for use on Windows, OS X, iPad & iPhone, and Android.
People are prone to verbally regurgitate what they know, especially those who work in the IT industry. This becomes very scripted and repetitive in conversations, but things are changing so fast that by the time we truly learn something, in most cases it’s already outdated and either partially or completely inapplicable. It won’t be too long before this blog itself is old news and/or outdated. What I want to briefly touch on in this blog are buckets.
Most IT people will define storage by what they know, and this is where I use the bucket analogy. If you are an EMC, NTAP, Hitachi, IBM, etc. array, you will be placed into the traditional or legacy storage bucket. If you are XtremIO, Pure Storage, NetApp EF-Series, Nimbus, etc., you are put into the all-flash storage bucket. Manufacturers who leverage flash and disk in the same storage get immediately lumped into the hybrid bucket. We have seen what hybrids have, and have not, been able to do from our own experiences, and have formed our opinions accordingly. The Tintri experience, though, has been different among its customer base… why?
Tintri has designed a storage appliance from the ground up to make the impediments of traditional storage completely go away. Tintri really is in a different bucket. This is not a storage array retrofitted with flash to your detriment; this is a flash array that puts disk on the back end for your benefit. Although this may seem counter-intuitive, let’s take a closer look at the benefits of placing spinning disk on the back end of a flash array.
Adding disk to a flash array provides…
Definitive Datastore Size – Dedupe and compression rates are very hard, if not impossible, to predict accurately. For example, some workloads like databases don’t dedupe very well, and encrypted workloads don’t dedupe or compress well either. All-flash vendors provide a small amount of raw physical SSD, and after RAID it becomes even less. They are highly dependent on inline dedupe and compression to obtain the glamorous benefits the array promises. The real problem is that you don’t know what you are getting. Some of these products don’t even do compression, only dedupe, and some are post-process, adding additional overhead. Some products have more physical storage in the array; however, capacity licensing stands in your way. There are even more overhead concerns with the many products doing a tremendous amount of garbage collection in the background. Here is some interesting reading:
- http://www.violin-memory.com/blog/garbage-collection-xtremio-fiction-fiction-part/ – Violin explains some nuances about the XtremIO product, and there is even a part 2!
- http://www.purestorage.com/blog/emc-xtremio-launch-analysis-what-we-liked-and-where-we-xpectedmore/ – even Pure jumps in against XtremIO here, but we all know that you too, Pure, convert to post-process at ~70% capacity.
Adding disk to a flash array gives you a definitive size for your flash array and lets you know exactly how much you will be able to place on that array, no matter what your actual deduplication and compression rates turn out to be.
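To make the point concrete, here is a toy sketch of why "effective capacity" on a reduction-dependent all-flash array is a guess while raw capacity is definitive. All numbers (raw TB, RAID overhead, reduction ratios) are invented for illustration, not any vendor's real figures:

```python
# Sketch: the same all-flash array yields wildly different "capacities"
# depending on the data-reduction ratio you *assume* for the workload.
# All numbers are illustrative assumptions, not vendor specs.

def effective_capacity_tb(raw_ssd_tb, raid_overhead, reduction_ratio):
    """Usable capacity after RAID overhead and an assumed reduction ratio."""
    usable = raw_ssd_tb * (1 - raid_overhead)
    return usable * reduction_ratio

raw_ssd_tb = 10.0      # small raw SSD footprint, as described above
raid_overhead = 0.2    # 20% lost to parity/spares (assumption)

for workload, ratio in [("VDI clones", 6.0), ("databases", 1.5), ("encrypted", 1.0)]:
    tb = effective_capacity_tb(raw_ssd_tb, raid_overhead, ratio)
    print(f"{workload:>10}: {tb:.1f} TB effective")
```

Disk on the back end sidesteps the guess: the capacity tier is a fixed, known number regardless of how the workload reduces.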
Capacity for Stale Data – No all-flash product relying on deduplication and compression for capacity can tell you the exact amount of data you can definitively store upon it. By adding disk to the back end of an intelligent flash solution, Tintri can smartly separate the data (at 8K granularity) that doesn’t generate IO (cold) and place it on HDD for capacity, while serving the IO-producing (hot or warm) data from flash. Who wants to store dead or rarely accessed data on expensive flash? A 90/10 rule is observed here: 90% of a typical workload’s IOPS are generated from just 10% of that workload’s storage footprint. Look back and reflect on the reasons we initially virtualized. One of those reasons was that a massive amount of compute was completely underutilized; most servers sat there and did nothing most of the time, leaving stale compute and memory resources behind. With the advent of multi-core processors we were able to use hypervisors (“software that emulates hardware”) to consolidate our server footprint. In the same respect, adding disk to a flash array helps us reduce the storage footprint while maintaining performance, by placing the data intelligently on the appropriate drives.
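A tiny simulation shows what the 90/10 rule buys you. This is my own toy model of hot/cold separation (the access counts and the promotion threshold are invented), not Tintri's actual filesystem logic:

```python
# Toy model of the 90/10 rule: 10% of 8K blocks are "hot" and generate
# nearly all the IO, so only they need to live in flash. Access counts
# and the threshold are illustrative assumptions.
import random

random.seed(1)
blocks = []
for i in range(10_000):                       # toy working set of 8K blocks
    hot = i < 1_000                           # 10% of the footprint...
    io = random.randint(50, 100) if hot else random.randint(0, 1)
    blocks.append(io)

total_io = sum(blocks)
flash_blocks = [io for io in blocks if io >= 10]   # "generates IO" -> flash

print(f"flash footprint:      {len(flash_blocks) / len(blocks):.0%} of blocks")
print(f"IO served from flash: {sum(flash_blocks) / total_io:.0%}")
```

In this sketch, keeping just 10% of the blocks in flash still serves the overwhelming majority of the IO, which is exactly why putting the cold 90% on cheap disk costs you almost nothing in performance.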
A Better Cost – Flash today is expensive, and when there is no longer any difference between the cost structures of HDD and SSD, then you will see an all-flash Tintri. Until that time, we know definitively that you don’t have to be all-flash to get all-flash performance… if your storage is intelligent about the data stored upon it. Do not confuse this with intelligence about how storage uses its own hardware (data placement, writing in longer stripes, etc.); we all have that (again, this is table stakes). I am speaking of intelligence around the data, specifically the VM. I will be writing a blog soon around storage intelligence specifically, so we can table that for another time.
Unlike the rival products from OEM incumbents that currently dominate the storage market, Tintri’s VMstore device is not a disk array that has been retrofitted with flash. Instead, Tintri’s storage system was designed from scratch to combine disk with flash storage, which allows the company to make very credible claims about superior performance and efficiency. And, almost uniquely for a shared storage system, Tintri’s product is managed at the level of individual VMs. Tintri can present strong arguments about the technical superiority of its product compared with conventional storage especially in regard to the efficient blending of flash with disk, and VM-level management.
Writing this blog reminded me of the movie Dirty Dancing, when Patrick Swayze stated, “Nobody puts Baby in a corner!” To that effect I say, “Nobody puts Tintri in a bucket!” Head-to-head comparisons against ANY competition, whether Fibre Channel or NFS, will always show the superiority of storage purpose-built for virtual machines.
Today’s announcement of Red Hat RHEV support in the Tintri VMstore is just the beginning of a game-changing year for the company. Tintri is very excited to partner with both VMware and now Red Hat to further advance the virtualization of the datacenter. “Tintri is the only hypervisor agnostic storage platform with VM-awareness and adaptive learning capabilities to support mixed workloads–servers, VDI, dev & test concurrently on a single Tintri VMstore.”
The VMstore is an industry marvel that delivers intelligent, responsive storage with visibility and control at a level that dynamically adapts to individual virtual workloads. This is a complete departure from legacy storage systems, in which you need to physically map and lay data down on specific volumes or LUNs. The art of what Tintri does as a solution is creating an abstraction layer for storage similar to what VMware did for physical servers. The Tintri approach is transformational in the storage arena and fundamentally changes the operating philosophy (people, cost and processes) of how storage is managed across the virtual landscape.
What Tintri is doing with storage is creating something very smart about the data that resides on it, so much so that it can see, learn, and adapt to the very applications we run within our VMs. Much has been said about how flash has become the disruptive technology in the storage industry these days. Flash is great, but it’s only one of the ingredients in making storage more useful to end users. Far more important is whether flash-powered storage has the intelligence to make applications run better. This is where Tintri’s application-aware storage is in a league of its own.
Tintri is taking Red Hat into a new and different era. By combining RHEV functionality with the power of Tintri’s own CloneVM, SnapVM & ReplicateVM technology, what you have is something very powerful that tells an even more powerful story for the Red Hat product and its feasibility in the enterprise.
Using its API functionality, the VMstore talks directly to RHEV-M to deliver all the same bells and whistles that customers get today with VMware. Quality of Service (QoS) is applied at the appropriate level inside this purpose-built storage and will follow the VM/vdisk to whatever VMstore it is live-migrated to. Nothing changes within the Tintri console; the VMs look alike, but you can distinguish them from each other by adding the ‘Hypervisor Type’ informational tab at the top of the interface.
Having the correct abstraction point for our virtual environments, and going forward into public and private cloud architectures, will be crucial to how we visualize our environments, how fast we can pinpoint and react to issues that arise, the agility with which we can make logical IT- and business-based decisions, and to keeping our footprint smaller and more simplified.
Simple, minimalistic, intelligent, proactive: these are not words often used for storage, especially storage used for virtual workloads. Yet these are the mantras by which Tintri has engineered the best product in the industry for VMs. Until we break free from rigid thinking concerning the placement of our VMs, we will never truly realize any of those qualities in our datacenters.
There is an old saying, “If you keep doing the things that you have always done, expect the results that you have always gotten.” To create new efficiencies in our organizations and achieve a level of virtualization as it was meant to be experienced, a new technology must be introduced… meet the Tintri VMstore.
- Press Release –
- Blog post – http://www.tintri.com/blog/2014/04/storage-for-multiple-hypervisors-with-tintri
- At a Glance –
- The Red Hat entry for the Alliances page: http://www.tintri.com/partners/alliances
- The Red Hat detail page: http://www.tintri.com/alliances/red-hat
As many of you know, I am a technology geek through and through. I love to read others’ thoughts and opinions on their blog sites, and I get a real kick out of some of those write-ups. I agree with some of those posts, while disagreeing with others. What I am about to write I personally regard as fact, but as always, feel free to disprove me by leaving feedback/comments. In full disclosure, I am an Engineer/Evangelist for Tintri, which I do not hide. I believe Tintri is in fact the next great storage company, for very valid reasons.
In today’s storage world there are many established players, and many new ones. One of my larger customers refers to the newcomers as ‘ankle biters’, and Tintri, at one time, was considered one of these ankle biters! Flash technology is quickly changing the game for storage, but it is well known that it isn’t a magic band-aid or cure-all. In fact, flash alone is only part of the solution. Flash definitely helps with performance-related problems, but does little, if anything, for the traditional headaches left behind by legacy players. Here are some things to consider before investigating and purchasing your next storage array for your ever-expanding VM environment.
- SSD – SSD arrays and all of the components within them (multi-core CPUs, memory, the types of disks they use, an all-flash log-structured filesystem, etc.) here in 2014 are all commodity hardware. The fact is that almost everyone is using commodity hardware. If a manufacturer is coming at you selling hardware alone: news flash, we all use the same stuff. None of it is game-changing… it’s commodity (the one exception that I know of is Violin, and we see how that is turning out). My answer: What if… your storage could do more for you than simply store your bits and bytes efficiently?
- Deduplication & Compression – This is quickly becoming table stakes these days. Everyone either is doing it or will be doing it soon. Some only provide dedupe and not compression (really, people?) and therefore have no effect on things like databases, providing only half the ancillary value of others. Other manufacturers do this post-process, or they switch to post-process at a certain capacity, and it’s a losing battle since it can never catch up. Doing this at all times on ingest is a much cleaner method. My answer: What if… deduplication and compression occurred on the data you really care about (active working sets) to take further advantage of SSD space, while your non-IO-producing data resided in capacity-based storage, self-adjusting on the fly?
- RAID Levels – Manufacturers need to stop pretending that this is some major benefit to the end user’s performance. We all know this is for the benefit of disk parity and data integrity. RAID was originally for spinning media that was bound on IOPS; SSD has changed this! Example: RAID 10 is no longer needed for performance reasons. At the end of the day, users will not wipe their brow and say to themselves, “I am so glad I had that RAID level when I was troubleshooting that VM.” You can make up all the new labels for RAID that you want; just ensure the integrity of my data, please. My answer: What if… you never had to worry about RAID levels, LUN masking, volume layout, queue depth settings, SSD capacity vs. HDD capacity, drive types, zoning, etc. ever again?
- IOPS – Who cares if you can do 100k, 300k, or a million IOPS in your footprint? If your VM environment only does 30k or 50k IOPS at peak load, what does placing it on storage that does 300k IOPS do for you? Hint: nothing. The underlying storage infrastructure will not change your workload; it will only facilitate it performing at its best. What is needed is an adaptive learning filesystem that continuously examines what performance your VMs need. My answer: What if… every VM got exactly enough flash (IOPS), every time, on time, and your VMs performed optimally and smartly from a storage perspective?
- Caching and Tiering Mechanisms – Caching architectures can overflow the flash buffers, causing very painful experiences. They can also be costly, as the first thing the vendor wants to do is install a larger controller to combat the issue. Tiering mechanisms move hot bits and bytes on a timed basis, transferring them from spinning disk to flash. The real problem is that they cannot react in real time to the VM, creating a window during which the application is not performing correctly, causing helpdesk calls to be generated, IT staff to start reacting, and many cycles to be spent chasing a problem without ever knowing what the underlying issue is. The issue may go away after the hot blocks are finally transferred (anywhere from hours to a full day), and then your staff is left chasing a ‘ghost issue’ that is no longer happening. Management always wants to know why, though, so admins spend their time trying to communicate an explanation. My answer: What if… your storage could adapt on the fly to your VM workloads in an intelligent way?
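That "window" is easy to see in a toy simulation. This is my own illustration of interval-based tiering (all latencies, tick counts, and the promotion interval are invented numbers), not any vendor's actual algorithm:

```python
# Toy simulation of timed tiering: the workload's hot region shifts early,
# but promotion only runs on a fixed interval, so reads sit at HDD latency
# until the next scheduled move. All numbers are illustrative.

HDD_MS, SSD_MS = 10.0, 0.5
PROMOTE_EVERY = 60            # tiering runs once every 60 ticks

tiered_to_ssd = {"region_A"}  # region_A was promoted in an earlier cycle
latencies = []
for tick in range(120):
    hot_region = "region_A" if tick < 10 else "region_B"   # workload shifts
    latencies.append(SSD_MS if hot_region in tiered_to_ssd else HDD_MS)
    if tick % PROMOTE_EVERY == 0 and tick > 0:
        tiered_to_ssd = {hot_region}                       # batch promotion

slow_ticks = sum(1 for ms in latencies if ms == HDD_MS)
print(f"ticks stuck at HDD latency before promotion caught up: {slow_ticks}")
```

The workload moved at tick 10, but the tiering engine didn't notice until tick 60; everything in between is the "ghost issue" your helpdesk hears about and can never reproduce afterward.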
I present to you the new table stakes for storage. This is where you should begin; real value lies well beyond this starting point. So beyond these things, what does your current storage, or the next vendor you are looking at, provide you? If these are the only benefits a manufacturer can provide, the product wasn’t designed with your needs in mind. So if you architect flash storage correctly, what does it look like? Puzzling, isn’t it? During this Simplicity Series, let’s begin to understand what the rest of this puzzle looks like when designed with virtualization and the user in mind, because that creates a solution and not just a product.
I am reblogging a post from a good friend of mine, Scott Sauer. This was an excellent post that I didn’t get around to writing an original blog for and he did a fantastic job. Reblogging with his permission.
Last week (3/20/2014) Tintri announced our vSphere web client plugin that brings the familiar performance metrics that are found in our VMstore web user interface, to the vSphere web client. This plugin is great for those customers that have begun to adopt and utilize the VMware vSphere web client (the non C# windows based client). As a reminder, the vSphere web client is where all of the new VMware capabilities and management functionality will be integrated going forward. As of today (3/24/2014) the Tintri vSphere Web Client plugin is now available in tech preview mode on our support portal. This new plug-in is a no cost item for our customers, so please feel free to download and install at your convenience!
A Hidden Gem
The Tintri integration is a nice win for all of our customers. The rich data we provide back to the web client is really a game changer when it comes to performance troubleshooting, data protection (per VM) and capacity planning. One of the coolest features that our development team included in the new plugin is the ability to apply our NFS best practices to your ESX hosts with the click of a button.
Below you can see I have selected a Tintri datastore in the web client and have right clicked the object to enable the Tintri menu option to appear:
After selecting the “Apply best practices” menu option, I am now presented with a list of ESXi hosts that have access to this particular datastore. In my lab/demo environment, this happens to be one ESXi host but in a normal production environment, this would be the entire cluster where you could apply these settings to all of the ESXi hosts at the same time.
Notice where I have the arrows pointing in the first 3 columns compared to the following 3 columns. There are no gray italicized “match” values present in the selections. This indicates that the ESXi host we are looking at is not running in our best practices configuration. As a side note, the Tintri vSphere Best Practices documentation can be found on our support portal.
Let’s set the correct best practices for this particular ESXi host:
Step 1: select the “Set best practices values” button at the lower left-hand side of the screen. Step 2: notice the values have now been corrected on the ESX host in this example, and the italicized gray “match” value is displayed in the first three columns. Step 3: select the “Save” button in the lower right-hand corner of the menu to apply the values we have just set to the above host. The ESX host will need to either be rebooted or have its management services restarted to re-read the new values.
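Conceptually, the "match" column boils down to comparing each host's advanced settings against a table of desired values. Here is a minimal sketch of that check; the setting names are real ESXi NFS-related advanced options, but the target values are placeholders of mine, not Tintri's published best practices (see the documentation on the support portal for those):

```python
# Sketch of the plugin's "match" check: compare a host's advanced settings
# to best-practice targets. Target values below are placeholders, not
# Tintri's official recommendations.

BEST_PRACTICE = {
    "NFS.MaxVolumes": 256,      # placeholder value
    "Net.TcpipHeapSize": 32,    # placeholder value
    "Net.TcpipHeapMax": 512,    # placeholder value
}

def check_host(current_settings):
    """Return {setting: 'match' | 'mismatch'}, like the gray 'match' column."""
    return {
        name: "match" if current_settings.get(name) == want else "mismatch"
        for name, want in BEST_PRACTICE.items()
    }

host = {"NFS.MaxVolumes": 8, "Net.TcpipHeapSize": 32, "Net.TcpipHeapMax": 512}
for name, status in check_host(host).items():
    print(f"{name:20} {status}")
```

The plugin does the remediation step for you as well, writing the desired values back to every host you select, which is what makes the one-button cluster-wide apply so handy.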
This little hidden gem is a nice added feature for many customers because it can quickly validate your cluster settings, ensuring you are getting the best performance possible when running VMware vSphere in combination with Tintri. VMware vSphere Host Profiles would be another great way to apply the Tintri NFS best practices automatically to your hosts/clusters, but many customers are not running vSphere Enterprise Plus licensing and do not have access to the Host Profiles functionality. The Tintri plugin now provides a simple alternative method for applying our best practices to your environment.
As much as I love doing this manually (sarcasm detected), our friends over at VMware have ingeniously automated this with a tool that includes customizable templates to enable or disable Windows system services and features, in accordance with VMware recommendations and best practices. It also allows you to analyze the master images that you have already created. This will come in really handy for those who would like to do this quickly, and it saves me from blogging about the old way and each individual optimization and what it meant! Easy Peazy!
Having been to VMworld and Citrix Synergy before, I did not know what to expect from (ISC)2 Security Congress, which is apparently simultaneously dubbed ASIS 2013. The overwhelming feeling I get is that this is much less about the IT portion of security ((ISC)2) and more about the physical aspects of security (ASIS). I am attending to keep my (ISC)2 CISSP certification active, because the CPE credits you earn for attending put me at the requirements for maintaining that certification, which has to be renewed every 3 years. So far this is much less grand than the conferences either Citrix or VMware puts on.
Strategic. Smart. Secure. – This is the theme for this year’s conference.
STRATEGIC – With the songs “Taking Care of Business”, “Firework” and “Don’t Stop Believing” being sung live, the opening scene was painters creating a scene around the word STRATEGIC. [Emphasized using storylines from the life of Winston Churchill] This was followed by Steve Surfaro, BDM for Axis Communications, giving a live speech about being strategic.
SMART – More live songs… “If You Stop Me Up”, “Why Haven’t I Heard From You?”, “Anyway You Want It”, and a painted scene for the word SMART. This was followed by a very short talk from James Antonelli, VP of Guardsmark, LLC. This began the emphasis on (ISC)2 security certifications and their importance. [Emphasized using the face of Albert Einstein]
SECURE – “We Will Rock You”, “We Are The Champions”, “Simply The Best”, and of course a painted scene for the word SECURE followed. This was followed up by an even briefer talk from Rocco L. DeFelice, Executive VP of Securitas Security Services USA Inc. [Emphasized using the ASIS International logo and a hidden camera symbol, circuit board, and keypad]
This was followed by some more gratuitous singing and a painted scene representing Chicago and the Blues Brothers. Of course “Sweet Home Chicago” was played. This was a bit cheesy and long, without a lot of meat to the whole thing. We were all then dismissed to attend the opening of the vendor area. Beware: for those interested more in the IT portion of security, there really aren’t any vendors worth looking at, with the exception of Cisco from what I saw. Unfortunately this area was dominated by physical security as well.
I am hoping that the sessions are going to be the highlight of the conference. There are a few sessions that I believe will be pretty interesting; a brief synopsis of each:
Cracking the Cloud Conundrum presented by Andrew Jaquith CTO, SilverSky.
You have a job to do, secure your organization’s infrastructure and communications. This is no small task, but your IT budget is flat or shrinking, security risks are ever-evolving and complexity and regulations are growing. Your employees want to use their own devices at work, and they’re increasingly mobile, making it even more difficult to protect your company’s information. You want to utilize the potential of the cloud, but you are afraid of the security risks, uncertainty, impact on IT, cost of switching and having to answer to your Board about all of it.
Securing Big Data and the Grid presented by Dan Houser from (ISC)2
Big Data is exploding on the InfoSec landscape, leaving many practitioners wondering how to create a viable yet secure implementation. Standard practice is to segregate, isolate or obfuscate, all of which create diseconomies of scale and erode the very value that can be gained from a big data implementation. A Security Architect and a Cloud Architect have delivered multiple secure cloud and big data solutions for their organization, and will deliver an interactive case study demonstrating how to lock down big data while still delivering business value and keeping the business happy.
Wednesday’s ASIS Keynote Speaker: Steve Wozniak – Co-Founder of Apple Computer and Philanthropist
A Silicon Valley icon, Steve Wozniak single-handedly designed the first personal computer and later redirected his lifelong passion for mathematics and electronics toward lighting the fires of excitement for education in grade school students and their teachers. In 1976, Wozniak and Steve Jobs founded Apple Computer, Inc. with Wozniak’s Apple I personal computer. A year later he introduced the Apple II, which was integral in launching the personal computer industry. After leaving Apple in 1985, Wozniak was involved in various business and philanthropic ventures. He currently serves as chief scientist for Fusion-IO and is a published author.
More to Come…
Things have been really busy, but I am letting everyone know that I have straightened out all the expired DNS registrations and found some breathing room in my current job to start posting again. We can now be read at both www.thevbox.info and www.thevbox.net, so expect some good new content coming out of thevbox website soon.
Think about this for just a second. Conventional or traditional storage was created 20 years prior to virtualization. So, as that sinks in, I’ll ask you a question: why has server provisioning with compute made so many advances in regard to lowered costs, less complexity, and higher performance, while storage has remained relatively the same? Even when talking about SSD, the larger players in the storage market are only bolting it on as a cache point, and those leveraging all-flash arrays don’t really address the real problems with storage for virtualized environments. Performance by itself doesn’t mean that you have resolved all issues. “Am I alone here?” … “Can I get an Amen?” Just because a large traditional storage manufacturer purchases a company specializing in conventional flash storage, do you think they suddenly resolve their issues regarding how that product interacts with the virtual environment? The answer is no. Now, if they rewrote their code from the ground up to take specific advantage of that flash storage for virtual environments, then you might be onto something. This, my friends, is exactly what Tintri has done.
Although Tintri was purpose-built for virtualized environments, period, I will be writing specifically about VDI in this post. I am fully certified on both VMware View and Citrix products, and my livelihood for the past few years has been centrally focused on VDI: performing assessments, plan and design work, and implementations. I have integrated great third-party products such as Trend Micro Deep Security, Unidesk, and Imprivata, and with those comes an increase in complexity from an architectural standpoint, and more specifically a storage standpoint. Let’s look at how one traditional storage provider carves up storage to meet a specific 500-seat VDI demand. This is straight from their best practices document and is available for anyone to see. On the left is how EMC will carve up your storage into several RAID groups, then into LUNs, tiering storage with bolt-on SSD cache (which is expensive, BTW). Other storage vendors’ solutions aren’t much better. There are several things wrong with this… let me elaborate on a couple of points here using the 500-seat comparison.
- What happens when you need to advance beyond 500 seats? (What happens to what you have just architected? Back to the well for more spindles? More SSD? Do you have the finances available for that?)
- What happens when you have more than one golden image or use case? (Hint, you only have room for one image in this small 100GB space for a golden image. In VMware View, since a recompose process requires that the replica has to be written before the original is deleted, multiple images will run you out of space. With XenDesktop it doesn’t even make sense.)
- When using a third-party product like Unidesk, getting the right amount of I/O out of the CachePoints becomes extremely important to drive that performance. In this design there is not enough room in SSD for the CachePoints in the majority of cases.
- Did you have enough I/O built into the original design to accommodate the virtual infrastructure for Citrix XenDesktop or VMware View and all of the VMs? How about for the infrastructure needed for Trend Micro Deep Security? How about the throughput and latency metrics?
- How do you know for certain how many more VMs that you can fit on your current storage before performance is impacted or you are simply out of room?
- With traditional block storage, are you getting any deduplication or compression advantages? (I can answer this… no.)
- How about your maximum VMs per LUN when using block storage? Have you considered that?
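The headroom question above is the kind of back-of-the-envelope math a VM-aware "fuel gauge" automates for you. Here is a rough sketch; the per-VM footprint and IOPS figures are assumptions for illustration, and real sizing depends on your own measured workload:

```python
# Rough headroom estimate: how many more VMs fit before you run out of
# capacity or performance? Per-VM numbers are assumptions, not guidance.

def vm_headroom(free_gb, spare_iops, gb_per_vm=40, iops_per_vm=30):
    """Additional VMs that fit, limited by whichever resource runs out first."""
    return min(free_gb // gb_per_vm, spare_iops // iops_per_vm)

# Example: 4 TB of free capacity and 6,000 IOPS of unused performance headroom.
# Capacity allows 4000 // 40 = 100 more VMs; IOPS allows 6000 // 30 = 200.
print(vm_headroom(free_gb=4000, spare_iops=6000))
```

With a traditional array you rarely know either input with confidence, which is precisely the point of the bullet above: the math is trivial, but the data to feed it is missing.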
I could go on and on but, here is one more really good question, “What if your storage was aware that VMs were running on it?” (See: VM-Aware)
With the Tintri VMstore there are no RAID groups to worry about and no LUNs to carve up. Using NFS, you can see from the picture on the right how Tintri answers that best-practice design for VDI. Some people promise simple, but Tintri really delivers it. There are no cost, complexity or storage performance barriers for VDI anymore, which has allowed Tintri customers to realize real ROI when implementing virtual desktops, bringing VDI storage costs from ~60% of the project down to ~15–20%. Its hyper-density allows for up to 1,000 VMs on one single Tintri storage appliance (see product specs). [In a server environment you can expect to get 250–300 server VMs on a single Tintri datastore]
Tintri also gives you instant bottleneck visualization, interchangeable datastores, intuitive fuel gauges showing available capacity and performance headroom, VM trend-over-time statistics, VM auto-alignment, per-VM snapshots, and more. It wraps QoS around each VM, ensuring performance, and virtually eliminates the usual worries surrounding boot storms, AV storms, and login storms in VDI environments. So my point is: if you can decrease CapEx and OpEx, and decrease the complexity of storage while increasing its performance (which is spotlighted by VDI), then what are you waiting for? Give your VDI implementation over to a Tintri VMstore and rest easy that you made a great decision. Some of the best products are the ones you don’t have to manage and that just flat out work (see Data Domain). Isn’t it time you stopped the LUNacy?
Interesting VDI Video:
“Stay thirsty my friends.”
~ The most interesting man in the world.