Integrity - An Internal Compass

AUTHORED BY MARK JOHNSON, VICE PRESIDENT, MANAGED SERVICES @ GUIDEIT

Not long ago the call went out for a volunteer to write the next installment of our series of GuideIT Values blog entries.  With the topic being “integrity” I quickly said “I’ll take that one”, thinking to myself “hey that’s an easy one to knock out.”  Well, as it turns out, not so much.

As I put fingers to keys I started with the predictable list of “challenges to integrity” but soon had to ask myself, how do you write about integrity in a way that doesn’t come across as either sanctimonious or overly simplistic?  And further, how do you translate a critical foundation of character into mere words?

At GuideIT our Founders adopted this approach in an attempt to express what integrity means to us:  “We will hold each other to unquestionable standards of honesty and ethics, in words and actions, and operate with transparency.”

Helpful, but still, what does that mean?  If integrity in business meant simply being honest, that wouldn’t be a terribly high bar to clear, though isn’t it sad how some fail even that?  No, too often we’re faced with opportunities to “pass or fail” an integrity test in far less visible ways, or ways in which there’s not necessarily a clear-cut “right” answer.  That’s where the “unquestionable” part comes in.  The standard is clear; the measure remains harder to quantify.  But let’s face it – we all know it when we see it.  So do our fellow team members, and so do our customers.

Can you teach integrity?  I’d say yes and no.  Without question you can use day-to-day opportunities (and challenges) in business to guide your team members toward what it means to operate in the center of the ethical playing field, whether leading by example yourself or providing specific guidance about your expectations for ethical behavior as situations arise.  So yes, you can absolutely teach integrity, but only to a point.

No matter how hard you work to establish an environment conducive to both earning and maintaining trust, inherently there is still an element of character that has to come from within, one that if missing will never consistently meet the expectation to operate with “unquestionable standards of honesty.”  To me, that internal compass is called having a conscience, emboldened with the courage to choose the harder right, rather than the easier wrong, even when the decision or the results may not be popular.  There are lots of people who know the right thing to do; at GuideIT we look for the ones actually willing to do it, and hold everyone, including our leaders, to that same standard.

At GuideIT our motto is “Do Technology Right.”  As I reflected on what I initially thought was a simple marketing slogan, my “ah-ha moment” came when I realized it also provides a straightforward approach to operating with integrity.  “Do Technology Right”, absolutely.  But how about, simply, do what’s right.

I guess it wasn’t all that hard after all.

Individual Accountability, Part of a Whole

AUTHORED BY JOHN LYON, CHIEF OF FINANCE @ GUIDEIT

The reality of organizational life is never black and white.  More often than not, accountability is muddled and people are not fully aware of the direct connection between their efforts and results.  We tend to keep ourselves from being productive simply by not holding ourselves accountable for our actions.  It is of utmost importance to first hold yourself accountable for your own obligations, commitments, and actions before participating in a team environment. 

Accountability is about improvement.  Improve yourself, and the team will improve in turn. Tom Price nailed it when he said, “One person’s embarrassment is another person’s accountability.”  We are all in a leadership role, as all team members are responsible for contributing to the success of the organization.  Without accountability, an organization would cease to exist.  By not owning up to your responsibilities, you betray not only yourself but your team as well.

The major leagues would never send a player onto the field who has consistently missed mandatory practices, for obvious reasons: such an action would diminish the collective hard work of the other team members, and performance would decline rapidly.  The same goes for any type of team. There must be rules and adherence to them, a pattern that advances success.  And that pattern begins with the individual.

It is up to me and no one else to make sure I am doing what I know I should be doing. When someone has to hold me accountable, because I failed to do what I should have done, I have a serious conversation with myself. My belief is that no one should have to hold me accountable for my actions, responsibilities and goals. While I appreciate others helping me get better, I am the one that must hold myself to a high standard.

I am convinced if you want to advance your life personally or professionally, you must hold yourself accountable for your actions, responsibilities, and goals.  Think about it. Commitment is a choice and a decision that should be made responsibly. Why should it be someone else’s job to make sure you are doing the things that you know you should be doing?


The "Cloud": Data Warehousing in 2015 (Part 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

This week we will explore what is, in my opinion, the best BI product currently on the market: Redshift, by Amazon Web Services (AWS).

Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. It uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.
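
Because Redshift speaks the PostgreSQL protocol, loading and querying can be scripted with ordinary Postgres tooling. Here is a minimal sketch using the psycopg2 driver; the endpoint, credentials, table, bucket, and access keys are hypothetical placeholders, not values from this post.

    # A minimal sketch of loading data into Redshift over its PostgreSQL-
    # compatible interface; every identifier below is a hypothetical placeholder.
    import psycopg2

    conn = psycopg2.connect(
        host="examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com",
        port=5439,              # Redshift's default port
        dbname="dev",
        user="awsuser",
        password="example-password",
    )
    cur = conn.cursor()

    # COPY pulls files from S3 in parallel across the cluster's nodes,
    # which is why load speed scales with cluster size.
    cur.execute("""
        COPY sales
        FROM 's3://example-bucket/sales/'
        CREDENTIALS 'aws_access_key_id=EXAMPLE;aws_secret_access_key=EXAMPLEKEY'
        DELIMITER '|' GZIP;
    """)
    conn.commit()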

Redshift’s data warehouse architecture allows the user to automate most of the common administrative tasks associated with provisioning, configuring and monitoring a cloud data warehouse. Backups to Amazon S3 are continuous, incremental and automatic. Restores are fast! You are able to start querying in minutes while your data is spooled down in the background. Enabling disaster recovery across regions takes just a few clicks.
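
As an illustration of that automation, here is a minimal sketch of restoring a cluster from a snapshot with the boto3 SDK; both identifiers are hypothetical.

    # A minimal sketch of an automated restore using the boto3 SDK.
    # The cluster and snapshot identifiers are hypothetical placeholders.
    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Provision a new cluster from an existing snapshot.
    redshift.restore_from_cluster_snapshot(
        ClusterIdentifier="examplecluster-restored",
        SnapshotIdentifier="examplecluster-snap-2015-06-01",
    )

    # Block until the restored cluster is available; queries can begin
    # while the bulk of the data spools down in the background.
    redshift.get_waiter("cluster_restored").wait(
        ClusterIdentifier="examplecluster-restored"
    )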

Security is built-in. Redshift enables you to encrypt data at rest and in transit (using hardware-accelerated AES-256 and SSL), isolate your clusters using Amazon VPC, and even manage your keys using hardware security modules (HSMs). All API calls, connection attempts, queries, and changes to the cluster are logged and auditable.

Redshift uses a variety of innovations to obtain the highest query performance on datasets ranging in size from a hundred gigabytes to a petabyte or more. It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It has a massively parallel processing (MPP) data warehouse architecture. Parallelizing and distributing SQL operations, it takes advantage of all available resources. The underlying hardware is designed for high-performance data processing, using locally attached storage to maximize throughput between the CPUs and drives, and a 10GigE mesh network to maximize throughput between nodes.
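
To make those levers concrete, here is a hypothetical table definition showing how a distribution key, a sort key, and per-column compression encodings shape that parallel layout. The table and columns are invented, and the cursor is the one from the loading sketch above.

    # A hypothetical table illustrating the design levers described above:
    # DISTKEY spreads rows across nodes for parallel work, SORTKEY enables
    # zone-map pruning, and ENCODE sets per-column compression.
    cur.execute("""
        CREATE TABLE daily_sales (
            sale_id     BIGINT        ENCODE delta,
            customer_id INTEGER       ENCODE bytedict,
            sale_date   DATE          ENCODE delta,
            amount      DECIMAL(12,2) ENCODE raw
        )
        DISTKEY (customer_id)
        SORTKEY (sale_date);
    """)
    conn.commit()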

With just a few clicks of the AWS Management Console or a simple API call, you can easily change the number or type of nodes in your cloud data warehouse as your performance or capacity needs change. Amazon Redshift enables you to start with as little as a single 160GB DW2 Large node and scale up all the way to a petabyte or more of compressed user data using 16TB DW1 8XLarge nodes. 
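
A resize, for instance, is a single API call. This sketch uses the boto3 SDK with a hypothetical cluster identifier and target size:

    # A minimal sketch of resizing a cluster through the API with boto3.
    # The cluster identifier, node type, and count are hypothetical.
    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Request the new size; Redshift provisions the new cluster and copies
    # the data over itself, as described in the next paragraph.
    redshift.modify_cluster(
        ClusterIdentifier="examplecluster",
        NodeType="dw2.large",
        NumberOfNodes=4,
    )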

While resizing, it places your existing cluster into read-only mode, provisions a new cluster of your chosen size, and then copies data from your old cluster to your new one in parallel. You can continue running queries against your old cluster while the new one is being provisioned. Once your data has been copied to your new cluster, Redshift will automatically redirect queries to your new cluster and remove the old cluster.  

Redshift allows you to choose On-Demand pricing with no up-front costs or long-term commitments, paying only for the resources you provision. You can obtain significantly discounted rates with Reserved Instance pricing. With affordable pricing that provides options, you’re able to pick the best scenario to meet your needs.

Stay tuned for part 3 next week. In the meantime, what's your view on Redshift or other tools? Any challenges or projects you want to discuss?

The "Cloud": Data Warehousing in 2015 (Part 1)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Let’s take a look this week at the benefits of external hosting of our data warehouse.

With prices so affordable from well-known providers like Amazon (AWS), Microsoft (Azure), and Google, there is no business reason to host a data warehouse internally. All three refer to this offering as “Cloud Computing”.  No offense to the software and hardware marketing professionals reading this post, but I really think “Cloud Computing” is an overloaded phrase (“overloaded” in the object-oriented programming sense of one name carrying many meanings), and data warehousing should not simply be lumped under it.  Not to date myself, but thirty years ago we had a process called “time-sharing services”, also available from various vendors.  These services allowed us to run several types of statistical simulations and business analytics.  Cloud Computing is nothing more than what we did thirty years ago, just on a much larger scale.

Data in a data warehouse is managed in a columnar format based on some kind of key (unique or non-unique).  This enables analysis to be done over de-normalized rows created from a fact table.  In the mainframe days this was called an inverted list.  Today the cost of doing this exact same thing is geometrically lower.
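
To make the columnar idea concrete, here is a toy Python sketch with invented data, contrasting a row layout with a column layout:

    # A toy illustration (invented data) of row versus columnar layouts.
    # Row storage keeps each record together; columnar storage keeps each
    # field together, so scanning one measure touches far less data.
    rows = [
        {"customer_id": 1, "region": "TX", "amount": 40.0},
        {"customer_id": 2, "region": "OK", "amount": 25.0},
        {"customer_id": 3, "region": "TX", "amount": 60.0},
    ]

    # The same fact table, pivoted into columns keyed by field name.
    columns = {
        "customer_id": [1, 2, 3],
        "region": ["TX", "OK", "TX"],
        "amount": [40.0, 25.0, 60.0],
    }

    # Summing one measure reads two columns only, never the whole record.
    total_tx = sum(
        amount
        for region, amount in zip(columns["region"], columns["amount"])
        if region == "TX"
    )
    print(total_tx)  # 100.0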

Google, AWS, and Azure all offer similar partner ecosystems providing SaaS across the same variety of business categories; however, each has its area of specialization. All leverage their extensive data networks and processing capacity on a worldwide scale.  Years ago I worked at a co-location center in Dallas that was attached to a major telecom provider.  I was installing hardware in a rack one Friday and noticed a large cage and racks being installed next to our location.  Returning the following Monday, I found roughly 5,000 servers placed in that new rack space, all humming away.  It was Google installing a regional center for web-search capacity.  I thought that was an enormous economy of scale in 2006. Imagine what it is in 2015!

Google has the edge in web metrics.  Any metric imaginable about a web site, its usage, users, business, or demographics, Google has it remembered.  Not only remembered, but codified and classified as well.  The only drawback from my perspective is that their tools don’t seem user-friendly.  Google also has a unique “what if” analysis for digital marketing which neither of the others seems to address.

AWS has created Redshift.  This first-rate product has an excellent architecture built to obtain extremely high query performance on datasets ranging from a few hundred gigabytes to a petabyte or more. It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries.  It also uses massively parallel processing in its data warehouse architecture, parallelizing and distributing SQL operations to take advantage of all available resources.  Costs change frequently depending upon the competition, but pricing is very affordable.  AWS also competes well in the SaaS market, with offerings parallel to those provided by the others.

Azure is the newest participant in this threesome of data warehouse providers. Not only is it becoming competitive in the market, it is the logical choice if you are a Microsoft shop.  One large transition issue Microsoft data warehouse shops usually face is the move from SQL Server 2008 R2 to SQL Server 2012 or 2014. Another consideration is that Azure pricing and configuration are somewhat confusing, and the customer service tools may add to that confusion.

All of these services are more than capable of solving a data-warehousing requirement.  It’s just a matter of which one meets the needs of your business. In the following weeks I’ll guide you through implementation of a data warehouse in each of the above vendors and provide specific examples and output. 

Patients Demand Technology

AUTHORED BY WENDY DURRE, CUSTOMER EXECUTIVE @ GUIDEIT

What would you think if you walked into a physician’s office and they actually wrote down your appointment in an old-fashioned paper appointment book? Recently my mother called me to say she had left a physician’s office because of this very thing.  Although she is in her 70s and not necessarily tech savvy, it made her uneasy and less confident in that practice.  Why?  If they were using antiquated business practices, how would that affect her patient care and the way they treated her medical issue?

As a person who works in the technology field, I am accustomed to helping providers implement and optimize technology.  Today’s customer (patient) has a different set of expectations than even those of just 10 years ago.  You don’t have to grow up using video games to understand that technology is an integral part of the medical field and the patient experience.

Recently a study found that 97% of the patients surveyed approved of their physician using technology (including desktop and mobile devices) in the exam room, and 58% felt that it positively impacts their overall experience, especially when used to educate and explain.  What I find ironic is that technology abounds, and always has, in the healthcare world; however, we often hear that physicians and clinicians are reluctant to adopt new technology despite the fact that their patients welcome it.

While change is never comfortable, it is definitely necessary.  I predict physicians who choose not to adopt this new tech-savvy avenue will see a dramatic decline in the number of patients they see in their practice.   But, as long as technology doesn’t take away from the interpersonal communication they have with their patients, it will be an asset.  Not only will it improve their physician/patient experience, but their business practices as well.

Now what to do with all of that data?

Make it a great day!


The Touchy-Feely Side of IT

AUTHORED BY WENDY DURRE, CUSTOMER EXECUTIVE @ GUIDEIT

What do you think when you hear about the “touchy-feely” side of IT?  Am I referring to a new, softer keyboard, or something that works completely in emojis?  Try again!  Believe it or not, TECHNOLOGY impacts our lives not only in a practical way, but in an emotional way.

What I’m saying is YOU have an impact on others and the world as an IT professional. If you have a career in IT, whether as a Service Desk Agent, Project Manager, Developer, Marketer, or Executive Leader, you have experienced the touchy-feely side of IT...and you may not even realize it.

Have you ever thought about how your work impacts others? And how do you feel about your work?  According to a recent study, only 39% of employees believe that meaningfulness (the contribution their job makes to society as a whole) is important to overall job satisfaction, yet 61% are passionate about their work and 71% say they frequently put all their effort into their work. The takeaway here is that employees who find their work meaningful and fulfilling are more likely to be engaged and do their work well.

Here’s an example.  Does your work assist in the creation of IT jobs or increase employment opportunities in the IT space? Your impact may look something like this: You hire a candidate. That candidate has a family.  That family lives in a home purchased through a realtor who helped them find the best location close to work.  That candidate also works with a team within the company.  That team services the needs of its customer.  That team works on maintaining the new EMR application adopted by a medical practice treating and assessing ER patients.  We are definitely beyond keyboards, servers, and code.

By digging deeper and evaluating what our job is, we are able to understand that not only are we maintaining systems, we are impacting lives.  Every day as a result of your work, you impact hundreds of people.   It may seem like your job is a small part of a big process, but to those on the receiving end of your efforts, it is huge! 

I challenge you this week to see the scope of your impact on others through your job. I’d love to hear how it changes the way you see yourself in your company and community. So please leave your comments below and make it a great day!

Your Data: No Matter What You Do, It's Your Most Valuable Asset...DATA MINING (2 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last week we discussed data mining.  I shared a query using census data and discussed how data mining is of great value in creating business intelligence and driving new business.  Today we will explore some great sites for data mining and how to do it.

We begin with one of the largest web service suppliers, Amazon. Amazon maintains the largest collection of remote computing services...unless you ask Google or Microsoft. All provide cloud computing, big data services, and mass storage. They also provide API access to large data-mining sites. For example, Amazon provides access to many data sets, including the following...

  • Climate Data
  • Genome Data 
  • Material Safety Data Sheets
  • Petroleum Public Data Set

Let’s explore a scenario where you work for a Health Insurance company and do Business Analytics. Your Marketing Department asks for your assistance in getting information to price a policy for a company that refines oil and gas. The prospective client provides the Marketing Department with the following information:

  • List of all locations, with employee demographics (age, gender, etc.)
  • List of all chemicals used by location
  • List of all products refined at each location
  • Several other relevant pieces of data

Where do you start?  Using your Amazon account, create a repository for the relevant information.  Then take each data set and apply it to the project.  For example (a toy sketch of one such cube follows the list):

  • Climate Data:  Based on past experience in the insurance market, I know that weather affects health depending on climate.  The first query I build will create a cube that includes all the facts related to the climate in that location and its surrounding areas.
  • Genome Data:  My next cube will explore demographics, specifically around gender and age.  Knowing the prevalence of diseases (cancer, heart disease, etc.) helps determine the risks involved in insuring this group.
  • Material Safety Data Sheets & Petroleum Public Data Set:  Combined, I can create a cube that lists the products refined and the chemicals used, as well as any known carcinogens.
  • Additional Options:  For this example, FICO scores are important.  They affect cost and are pretty much non-negotiable in making quote decisions.
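
As a toy illustration of the cube idea, here is a hypothetical pandas sketch that aggregates invented employee data by location and age band; every name and number is made up:

    # A toy "cube" built with pandas from invented data; all names and
    # figures below are hypothetical.
    import pandas as pd

    employees = pd.DataFrame({
        "location": ["Houston", "Houston", "Tulsa", "Tulsa"],
        "age_band": ["20-39", "40-59", "20-39", "40-59"],
        "headcount": [120, 85, 40, 55],
    })

    # Pivot into a location-by-age-band cube of headcounts, ready to be
    # combined with climate, genome, or chemical-exposure cubes.
    cube = employees.pivot_table(
        index="location", columns="age_band",
        values="headcount", aggfunc="sum",
    )
    print(cube)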

By continuing these steps and combining cubes, I’m able to build a more complete perspective.  Now when I meet with the Marketing Department, they have a broad analysis that allows them to determine the most cost-effective and comprehensive way to insure this client.  It sounds complicated, and it is.  But it’s one of the largest and most vital responsibilities of business intelligence.

How does your organization use data mining to solve business challenges?

Your Data: No Matter What You Do, It's Your Most Valuable Asset...DATA MINING (1 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last weekend I read a very interesting book entitled “The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It” by Scott Patterson. I highly recommend it as a must-read for all of you who are doing business intelligence, and especially data mining.

So what is Data Mining? Basically it is the practice of examining large databases in order to generate new information. Ok, let’s dig into that to understand some business value.  

Let us consider the US Census. By law, it is conducted every ten years and produces petabytes of data (1 petabyte is one quadrillion bytes), crammed full of facts that are important to almost anyone doing data mining for almost any consumer-based product or service. Quick sidebar and promo…in part 2 of this micro series, I will share where databases like the census and others can be accessed to help make your data-mining exercise valuable.

So if I were asked by the marketing department to help them predict how much to spend on a new advertising campaign to sell a new health care product, one that enhances the existing dental benefits of those already in qualified dental plans, I would have a need for data mining. With these criteria I would, for example, query the average commute time of people over 16 in the state of Texas. It is 25 minutes. We would now have a cornerstone insight to work from. This of course narrows the age group to those receiving incomes and not on Social Security and Medicare. In an effort to validate a possible conclusion, we run a secondary query on additional demographic criteria and learn that the 25-minute figure doesn’t change, yet 35% of those people belong to one particular minority segment.
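
For illustration, that first query might look like the following hypothetical pandas sketch; the file and column names are invented stand-ins for a real census extract:

    # A hypothetical sketch of the commute-time query; the CSV file and its
    # column names are invented stand-ins for a real census extract.
    import pandas as pd

    census = pd.read_csv("texas_census_extract.csv")

    # Restrict to respondents over 16 with reported income, then average
    # their commute time.
    workers = census[(census["age"] > 16) & (census["income"] > 0)]
    avg_commute = workers["commute_minutes"].mean()
    print(round(avg_commute))  # the post cites a 25-minute statewide average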

I pass this information to the Marketing Department and they now have the basis to understand how much they should pay for a statewide marketing campaign to promote their new product, when to run the campaign, and what channels and platforms to use.

DATA MINING, can’t live without it. Next week we’ll cover how and where to mine. 

Your Data: No Matter What You Do, It's Your Most Valuable Asset (Part 2 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last week we declared, “If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years. That’s right…five years”.

This week, I’m introducing a perspective on leveraging Big Data to create tangible asset value. In the world of Big Data, structure is undefined and management tools vary greatly across both open source and proprietary offerings…each requiring a set of skills distinct from the world of relational or hierarchical data. To appreciate the sheer mass of the word “big”, we are talking about feeds of 45 terabytes a day from some social media sites. Some of the users of this data have nicknames like “Quants”, and they use tools called Hadoop, MapReduce, GridGain, HPCC, and Storm. It’s a crazy scene out there!

Ok, so the world of big data is a crazy scene. How do we dig in and extract value from it?  In working with a customer recently, we set an objective to leverage Big Data to help launch a new consumer product. In the old days, we would assemble a survey team, form a focus group and make decisions based on a very small sample of opinions…hoping to launch the product with success. Today we access, analyze, and filter multiple data sources on people, geography, and buying patterns to understand the highest probability store locations for a successful launch. All these data sources exist in various electronic formats today and are available through delivery sources like Amazon Web Services (AWS) and others. 
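
As a toy illustration of that kind of screening, here is a hypothetical Python sketch; every zip code and figure below is invented:

    # A toy sketch of the zip-code screening described above; the data and
    # thresholds are invented for illustration.
    import pandas as pd

    zips = pd.DataFrame({
        "zip": ["75001", "75002", "75003", "75004"],
        "median_income": [91000, 54000, 88000, 76000],
        "grocery_stores": [9, 3, 7, 8],
        "avg_child_age": [3.1, 6.4, 2.8, 3.5],
    })

    # Keep zip codes with young children, above-average median income, and
    # good grocery-store saturation, mirroring the launch criteria below.
    candidates = zips[
        zips["avg_child_age"].between(2, 4)
        & (zips["median_income"] > zips["median_income"].mean())
        & (zips["grocery_stores"] >= 5)
    ]
    print(candidates["zip"].tolist())  # e.g. ['75001', '75003']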

In our case, after processing one petabyte (1000 terabytes) of data we enabled the following business decisions…

  • Focused our target launch areas to five zip codes where families have an average age of children from two to four years old with a good saturation of grocery stores and an above average median income
  • Initiated a marketing campaign including social media centered on moms, TV media centered on cartoon shows
  • Offered product placement incentives for stores focusing on the right shelf placement for moms and children.

 
While moms are the buyers, children are influencers when in the store. In this case, for this product, lower shelves showed a higher purchasing probability because of visibility for children to make the connection to the advertising and “help” mom make the decision to buy.  

Conclusion? The dataset is now archived as a case study and the team is repeating this exercise in other regional geographic areas. Sales can now be compared between areas enabling more prudent and valuable business decisions. Leveraging Big Data delivered asset value by increasing profitability, not based on the product but rather on the use of data about the product. What stories can you share about leveraging Big Data? Post them or ask questions in the comments section.

Your Data: No Matter What You Do, It's Your Most Valuable Asset (Part 1 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years.  That’s right…five years.

Not so sure that’s true? Ask entertainment giant Caesars Entertainment Corp. for their perspective. They recently filed Chapter 11 and have learned that their data is what creditors value (Wall Street Journal, March 19, 2015, “Prize in Caesars Fight: Data on Players”; the customer loyalty program is valued at $1 billion by creditors). The data intelligence on their customers is worth more than any of their other assets, including real estate.

Before working to prove this seemingly bold statement, let’s take a look back to capture some much needed perspective about data.

The Mainframe

Space and resources were expensive, and systems were designed and implemented by professionals who had a good knowledge of the enterprise and its needs.  Additionally, very structured processes existed to develop systems and information. All this investment and structure was often considered a bottleneck and an impediment to progress.  Critical information, such as a customer file or purchasing history, was stored in a single, protected location. Mainframe business intelligence offerings were report-writing tools like Mark IV. Programmers and some business users were able to pull basic reports.  However, very little data delivered intelligence like customer buying habits.

Enter the Spreadsheet

With the introduction of the PC, Lotus 1-2-3 soon arrived in the market.  We finally had a tool that could represent data in a two-dimensional (2D) format, enabling the connection of valuable data to static business information. Some actionable information was developed, resulting in better business decisions. This opened up a whole new world to what we now call business intelligence. Yet connecting the right data points was a cumbersome, manual process. Windows entered the scene and with it, the market shifted from Lotus to Excel, carrying over similar functionality and challenges.

Client-Server World Emerges

As client-server architectures emerged in the marketplace, data became much more accessible. It was also easier to connect together, relative to the past, giving stakeholders real business intelligence and demonstrating its value to the enterprise. With tools like Cognos, Teradata, and Netezza in play, data moved from 2D to 3D presentation. Microsoft also entered the marketplace with SQL Server. All this change actually flipped the challenges of the mainframe era.  Instead of bottlenecked data that’s hard to retrieve, version creep entered the fold…multiple versions of similar information in multiple locations. What’s the source of truth?

Tune in next week as we provide support for data being your most valuable asset with a perspective and case study analysis of a Business Intelligence model that uses all technology platforms and delivers the results to your smartphone.   

IT Trends, Change and The Future...A Conversation With an Industry Veteran

As a technology- and healthcare-centric marketing firm, we at illumeture work with emerging companies to achieve more of the right conversations with the right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT, responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating, and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today.

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary: you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No, it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners, OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office 365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email or, in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud and super-users hosted locally.  GuideIT uses Office 365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case all that long ago, and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end-to-end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than in the old mainframe or client-server days, and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.