Case Snapshots

See how GuideIT has helped companies achieve their business goals.


Your Data: No Matter What You Do, It’s Your Most Valuable Asset…DATA MINING (1 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last weekend I read a very interesting book, “The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It” by Scott Patterson. I highly recommend it as a must-read for anyone doing Business Intelligence, and especially Data Mining.

So what is Data Mining? Basically, it is the practice of examining large databases in order to generate new information. Ok, let’s dig into that to understand some of the business value.

Let us consider the US Census. By law, it is conducted every ten years, and it produces petabytes of data (1 petabyte is one quadrillion bytes) crammed full of facts that matter to almost anyone doing data mining for almost any consumer-based product or service. Quick sidebar and promo…in part 2 of this micro series, I will share where databases like the census and others can be accessed to help make your data mining exercise valuable.

So if I were asked by the marketing department to help predict how much to spend on a new advertising campaign for a health care product that enhances the existing dental benefits of people already in qualified dental plans, I would need data mining. With these criteria, I would, for example, query the average commute time of people over 16 in the state of Texas: 25 minutes. We would now have a cornerstone insight to work from. This also narrows the age group to those earning incomes rather than living on Social Security and Medicare. To validate a possible conclusion, we run a secondary query on additional demographic criteria and learn that the 25-minute commute count doesn’t change, yet 35% of those people belong to one particular minority segment.
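A query of that shape is easy to sketch in code. The snippet below is a toy illustration only; the records, field names, and values are invented and do not reflect the real Census schema:

```python
# Toy illustration of the two demographic queries described above.
# The records, field names, and values are invented, not real Census data.
from statistics import mean

people = [
    {"state": "TX", "age": 34, "commute_min": 30, "segment": "A"},
    {"state": "TX", "age": 17, "commute_min": 20, "segment": "B"},
    {"state": "TX", "age": 15, "commute_min": 0,  "segment": "A"},
    {"state": "CA", "age": 40, "commute_min": 45, "segment": "C"},
    {"state": "TX", "age": 52, "commute_min": 25, "segment": "A"},
]

# Primary query: average commute time of people over 16 in Texas
tx_commuters = [p for p in people if p["state"] == "TX" and p["age"] > 16]
avg_commute = mean(p["commute_min"] for p in tx_commuters)

# Secondary query: what share of that same group belongs to one segment?
segment_share = sum(p["segment"] == "A" for p in tx_commuters) / len(tx_commuters)
```

On a real census extract the same two aggregations would run over millions of rows, but the filter-then-aggregate shape is identical.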

I pass this information to the Marketing Department and they now have the basis to understand how much they should pay for a statewide marketing campaign to promote their new product, when to run the campaign, and what channels and platforms to use.

DATA MINING, can’t live without it. Next week we’ll cover how and where to mine.

Servant Leadership…How to Build Trust in The Midst of Turmoil

AUTHORED BY RON HILL, VICE PRESIDENT, SALES @ GUIDEIT

It was a sunny winter day and I had just started as the Client Executive at one of the largest accounts in the company. Little did I know, clouds were about to roll in. The CIO walked into my office and sat down with a big sigh. She communicated that they were ending our agreement and moving to a different service provider. We had 12 months. The news demanded immediate action from our company, carried implications in the market, and created an environment of uncertainty for our team of more than 700 people providing service support.

This was no time to defend or accept defeat. We had to act. Our account leadership team readied the organization for the work ahead and imminent loss. We formally announced the situation to the organization. There were tears and some were even distraught. Our leadership team had not faced this situation before. The next 12 months looked daunting.

Regardless, it was time to lead. We created a “save” strategy and stepped into action, beginning with daily team meetings. We invested time prioritizing and sharing action items and implications about information systems, project management, and the business process services. It was our job to operate with excellence, despite the past. It was our job to honorably communicate knowledge to the incoming service provider. One of the outcomes of our work was a weekly email outlining the past week’s accomplishments and expectations for the week ahead. The email often included a blend of personal stories and team successes. We even came up with a catchy brand for the email…Truth of the Matter. It turned out to be a key vehicle that kept our teams bonded and informed, and our leadership team used it to help maintain trust with the team.

During our work, we also began to rebuild trust with the customer as we continued to support them in all phases of their operation. Because of our leadership team’s commitment to service, transparency, and integrity, the delivery team was inspired to achieve many great milestones during those 12 months. We were instrumental in helping our customer win multiple business awards, including a US News and World Report top ranking. We also found ways to achieve goals that set new trends in their industry. Before we knew it, the year had come and gone and we were still there.

Looking back, that dark day when the CIO informed me we were done was actually the beginning of a relationship that lasted more than another decade. The team had accomplished an improbable feat. In the end, what mattered was our leadership’s focus on coming together with a single message and acting with transparency…letting their guard down to build an environment of trust with the team and with the customer. This enabled all of us to focus on meeting the goals of the customer, together.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 2 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last week we declared, “If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years. That’s right…five years”.

This week, I’m introducing a perspective on leveraging Big Data to create tangible asset value. In the world of Big Data, structure is undefined and management tools vary greatly across both open source and proprietary offerings…each requiring a set of skills distinct from the world of relational or hierarchical data. To appreciate the sheer mass of the word “big”, some social media sites generate feeds of 45 terabytes a day. Some of the users of this data have nicknames like “Quants”, and they use tools called Hadoop, MapReduce, GridGain, HPCC and Storm. It’s a crazy scene out there!

Ok, so the world of big data is a crazy scene. How do we dig in and extract value from it?  In working with a customer recently, we set an objective to leverage Big Data to help launch a new consumer product. In the old days, we would assemble a survey team, form a focus group and make decisions based on a very small sample of opinions…hoping to launch the product with success. Today we access, analyze, and filter multiple data sources on people, geography, and buying patterns to understand the highest probability store locations for a successful launch. All these data sources exist in various electronic formats today and are available through delivery sources like Amazon Web Services (AWS) and others.

In our case, after processing one petabyte (1000 terabytes) of data we enabled the following business decisions…

  • Focused our target launch areas on five zip codes where families have children with an average age of two to four years, a good saturation of grocery stores, and an above-average median income.
  • Initiated a marketing campaign, including social media centered on moms and TV media centered on cartoon shows.
  • Offered product placement incentives for stores, focusing on the right shelf placement for moms and children.

While moms are the buyers, children are influencers when in the store. In this case, for this product, lower shelves showed a higher purchasing probability because of visibility for children to make the connection to the advertising and “help” mom make the decision to buy.
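The zip-code screen behind the first bullet can be sketched as a simple multi-criteria filter. Everything below (the statistics, field names, and thresholds) is invented for illustration:

```python
# Hypothetical sketch of the zip-code screen described above; the data
# and thresholds are made up, not the customer's actual criteria.
zip_stats = [
    {"zip": "75001", "avg_child_age": 3.1, "grocery_stores": 12, "median_income": 78000},
    {"zip": "75002", "avg_child_age": 7.5, "grocery_stores": 9,  "median_income": 81000},
    {"zip": "75003", "avg_child_age": 2.4, "grocery_stores": 3,  "median_income": 90000},
    {"zip": "75004", "avg_child_age": 3.8, "grocery_stores": 15, "median_income": 52000},
    {"zip": "75005", "avg_child_age": 2.9, "grocery_stores": 10, "median_income": 71000},
]

MEDIAN_INCOME_BENCHMARK = 60000  # assumed area median income
MIN_GROCERY_STORES = 8           # invented "good saturation" threshold

# Keep only zip codes meeting all three launch criteria
targets = [
    z["zip"] for z in zip_stats
    if 2 <= z["avg_child_age"] <= 4
    and z["grocery_stores"] >= MIN_GROCERY_STORES
    and z["median_income"] > MEDIAN_INCOME_BENCHMARK
]
```

At petabyte scale this screen would run in a distributed engine rather than a Python list comprehension, but the business logic is the same.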

Conclusion? The dataset is now archived as a case study and the team is repeating this exercise in other regional geographic areas. Sales can now be compared between areas enabling more prudent and valuable business decisions. Leveraging Big Data delivered asset value by increasing profitability, not based on the product but rather on the use of data about the product. What stories can you share about leveraging Big Data? Post them or ask questions in the comments section.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 1)

Authored by Donald C. Gillette, Ph.D., Data Consultant @ GuideIT

If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years.  That’s right…five years.

Not so sure that’s true? Ask Caesars Entertainment Corp. for their perspective. They recently filed for Chapter 11 and have learned that their data is what creditors value. (Wall Street Journal, March 19, 2015, “Prize in Caesars Fight: Data on Players”; the customer loyalty program is valued at $1 billion by creditors.) The data intelligence about their customers is worth more than any of their other assets, including real estate.

Before working to prove this seemingly bold statement, let’s take a look back to capture some much needed perspective about data.

The Mainframe

Space and resources were expensive, and systems were designed and implemented by professionals with good knowledge of the enterprise and its needs. Additionally, very structured processes existed for developing systems and information. All this investment and structure was often considered a bottleneck and an impediment to progress. Critical information, such as a customer file or purchasing history, was stored in a single, protected location. Mainframe Business Intelligence offerings were report-writing tools like Mark IV. Programmers and some business users were able to pull basic reports. However, very little data delivered intelligence like customer buying habits.

Enter the Spreadsheet

With the introduction of the PC, Lotus 1-2-3 soon arrived in the market. We finally had a tool that could represent data in a two-dimensional (2D) format, enabling the connection of valuable data to static business information. Some actionable information was developed, resulting in better business decisions. This opened up a whole new world of what we now call business intelligence. Yet connecting the right data points was a cumbersome, manual process. Windows entered the scene, and with it the market shifted from Lotus to Excel, carrying over similar functionality and challenges.

Client Server World Emerges

As client/server architectures emerged in the marketplace, data became much more accessible. It was also easier to connect together, relative to the past, giving stakeholders real business intelligence and demonstrating its value to the enterprise. With tools like Cognos, Teradata, and Netezza in play, data moved from 2D to 3D presentation. Microsoft also entered the marketplace with SQL Server. All this change actually flipped the challenges of the Mainframe era. Instead of bottlenecked data that was hard to retrieve, version creep entered the fold…multiple versions of similar information in multiple locations. What’s the source of truth?

Tune in next week as we provide support for data being your most valuable asset with a perspective and case study analysis of a Business Intelligence model that uses all technology platforms and delivers the results to your smartphone.

Reduce IT Spending… Approach Rationalization The Right Way

AUTHORED BY FRANK T. AVIGNONE IV, TRANSFORMATION EXECUTIVE @ GUIDEIT

Meaningful Use, Health Information Exchange, and Predictive Analytics are a few phrases that keep hospital CFOs awake at night. As the hospital market prepares for another shift in reimbursement, including a 1.3% cut in Medicare reimbursement for 2015 and an additional 75% cut in DSH payments by 2019, the health system CFO faces innumerable financial challenges in maintaining a healthy balance sheet. Add to these concerns the looming ICD-10 transition expense and ongoing consolidation (including the aggregation of physicians and post-acute care providers), and the future is daunting for the chief financial officer and other executive stakeholders.

There is a bright spot for the health system CFO with respect to bringing sanity to healthcare IT spend on the balance sheet. It just requires a little courage. The majority of US health systems maintain an IT portfolio that supports redundant functions across the enterprise. In a consolidation environment where M&A activity is increasing, integrating disparate clinical and business systems can cost $70,000-$100,000 per bed. A simple technology portfolio rationalization effort can reduce IT spend in any environment by as much as 60% in capital expense and 30% in operating expense. The effectiveness of application portfolio rationalization, and its impact on the health system in terms of cost savings, revenue generation, and meeting the needs of clinical and business users, depends on the right approach.

While traditional application rationalization projects yield positive, quantifiable results, they typically do not account for “information rationalization”, and that omission reduces value and slows care delivery. The most important aspect of an application portfolio is not the application itself, but rather the information trapped within the application stack. Changing perspective will increase the value of any rationalization effort: releasing the information contained within legacy applications is the critical focus. Organizations can accomplish this by leveraging an enterprise service bus to overlay the information-rich interface engine architecture, reusing existing information without the tired “rip and replace” approach usually offered by software and IT vendors.

Once information is captured within the enterprise bus, it can be analyzed, consolidated into events, and used as real-time streaming information to better understand the real value of the data and its origins. While capex/opex cost reductions are the underlying principles of the application portfolio rationalization (APR) effort, the health system CFO and CIO can work together to create additional value. Simply by releasing the information, and in some cases virtualizing the associated application logic, the health care enterprise can preserve the value of, and improve access to, the information trapped within. This approach allows for the rationalization discovered by traditional disciplines while providing a single, uniform source of information and infrastructure to rapidly enable new business solutions.

The time has come for the health system CFO and CIO to work hand in hand to accurately understand and align business needs with an agile information technology stack, one that promotes boundary-less access to information independent of application silos, securely and dependably.

Service Desk Selection: 3 Checkpoints

AUTHORED BY SCOTT TEEL, MANAGED SERVICES EXECUTIVE @ GUIDEIT

Today’s Service Desk continues to evolve with the technology it supports for the end user community. Granted, it begins with a single seat and a phone. But from phone calls to email, self-service customer web portals, chat, and social media…the ways in which we engage help have changed and scaled dramatically.

All sources of customer engagement must be tracked and reported in a single ticketing system to ensure quality of service through measurable analysis of performance. And a strong value proposition is a must. As you or someone in your organization considers that value proposition, here are 3 checkpoints for selecting a Service Desk solution:

  1. Partnership. Service Desk capabilities are often labeled a commodity offering due to offshore capabilities. Many providers of these services battle for the lowest-cost solution without listening to and understanding individual customer requirements. If treated like a commodity, in most cases, the service becomes a bad investment. The right partner will offer the right solution by listening to and understanding the demands and risks of your needs, then applying the right automation, tools and utilities to make the service flourish and mitigate risk.
  2. Pricing. Yes, many variables drive the cost of a Service Desk offering up or down…onshore versus offshore, languages, first call resolution, ticketing, tool types, reporting, IT and application support, and so on. Regardless, service providers want to fill their excess capacity. Invest the time to understand their situation. By asking the right questions about their capabilities and willingness to be flexible (and their ability to execute within such flexibility using a defined methodology), you can find great value through negotiating the right balance of service and pricing.
  3. Metrics. You must ensure that your partner has the tools to establish a delivery baseline for the service, while following ITIL processes that enable Continual Service Improvement (CSI) throughout the relationship. The right tools include the availability and performance of the PBX/ACD system, the ticketing system, and any additional automated processes that show the CSI. The right reporting is available weekly and monthly, and it must be meaningfully measurable.
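As a rough illustration of establishing a baseline, two common metrics, first-call resolution and mean time to resolve, can be computed directly from a ticketing-system export. The record layout below is invented rather than any specific tool’s schema:

```python
# Hypothetical ticketing-system export; fields are illustrative only.
tickets = [
    {"id": 1, "contacts_to_resolve": 1, "minutes_open": 12},
    {"id": 2, "contacts_to_resolve": 3, "minutes_open": 95},
    {"id": 3, "contacts_to_resolve": 1, "minutes_open": 8},
    {"id": 4, "contacts_to_resolve": 2, "minutes_open": 40},
]

# First-call resolution: share of tickets closed on the first contact
fcr = sum(t["contacts_to_resolve"] == 1 for t in tickets) / len(tickets)

# Mean time to resolve: a second baseline number for weekly CSI reporting
mttr_minutes = sum(t["minutes_open"] for t in tickets) / len(tickets)
```

Tracking these two numbers week over week is one concrete way to make Continual Service Improvement measurable.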

In summary, evaluate your options and ask a lot of questions about their situation. You will develop the leverage you need to achieve the right service with maximum value. Approach your evaluation this way and you will increase the probability of partnering with a group that serves as an extension of your team.

Balancing Creativity and Efficiency in IT Service Management (ITSM) Environments: 3 Best Practices

AUTHORED BY SCOTT TEEL, MANAGED SERVICES EXECUTIVE @ GUIDEIT

Although many IT service managers enjoy the thrill of a good chase (identifying the problem, developing possible solutions, and then testing those theories), harnessing the creativity of outside-the-box thinkers can be a challenge. Most engineers and administrators base their problem solving on their own experience and training. While this is part of the reason you hire them, it can sometimes limit their problem-solving efficiency and overall performance when measured against the objectives of the business. While some IT problems may be easily identified and solved, others require a much more “detective-like” approach and more creativity. So how does a leader balance creativity and efficiency in an ITSM environment?

Here are 3 best practices to ensure problem solving remains streamlined while still fostering creativity…

  1. Collaborate. Infrastructure problems are complex and can span a multitude of functional areas. One-size-fits-all solutions are not the norm in IT today, and most solutions can coexist with or integrate into the foundation of your ITSM solution. So foster a proactive, organized collaboration environment that enables open sharing across domain expertise.
  2. Speak the same language and keep it simple. Problems should be solved with a balance of tactical and strategic insight. Ensure the final solution is reached through small, easy-to-understand steps and milestones that achieve the overall business goals with measured results. Make sure your IT specialists are on the same page by providing a clear understanding of the problem, possible causes, and possible outcomes.
  3. Bring in help if needed.  Sometimes the right answer will come from outside your group. Don’t be afraid to consider this option.

Creativity can be balanced with efficiency by fostering an environment where ideas and solutions can be freely shared with an organized and collaborative approach. Join us next week for our next microblog post!

Fedora 20: Firefox Reports Flash as Vulnerable

This problem starts with Firefox reporting that your flash-plugin is out of date. The warning disables all Flash content.

Next, we take a look at Mozilla’s Plugin Check to see what it thinks is going on.

Here we can see that version 11.2.202.440 is flagged as vulnerable. We then check about:plugins to see if it agrees.

about:plugins also reports 11.2.202.440, so there must be a problem, but it also tells us that an update is available. Now, I run regular yum updates on this machine, and I noticed that flash-plugin was updated just a few hours before I saw this alert. So let’s check the installed version.

[root@ltmmattoon matthew]# yum info flash-plugin
Loaded plugins: langpacks, refresh-packagekit
Installed Packages
Name        : flash-plugin
Arch        : x86_64
Version     : 11.2.202.442
Release     : release
Size        : 19 M
Repo        : installed
From repo   : adobe-flashplayer
Summary     : Adobe Flash Player 11.2
URL         : http://www.adobe.com/downloads/
License     : Commercial
Description : Adobe Flash Plugin 11.2.202.442
            : Fully Supported: Mozilla SeaMonkey 1.0+, Firefox 1.5+, Mozilla
            : 1.7.13+

Interesting: 11.2.202.442, which is higher than what Firefox is reporting. Firefox has of course been restarted, but let’s do it again just to make sure.

Now to fix it.

$ pwd
/home/matthew/.mozilla/firefox/cls7wbvm.default
$ mv pluginreg.dat pluginreg.dat.bak

Restart Firefox and it will collect new data on all of its plugins, and about:plugins will start reporting the correct version.

IT Project Management…Which Stakeholder Are You?

Authored by Guy Wolf, Transformation Executive @ GuideIT

So much material has already been developed and published about what a PMO is, what it can be, and how to set one up.  Much of the material is banal. For those of you who are fans of Monty Python, the “How to do it” skit comes to mind. This particular post focuses on something else: a perspective on stakeholder roles and the importance of clear objectives.

Often PMOs get started for the wrong reasons, putting a solution in place before fully understanding the primary objective. Some promote focusing on achieving a level of maturity first. Others propose starting at the project level, and as you demonstrate proficiency, moving “up” to the program, then portfolio level.  The problem with these approaches is that the “what” is confused for the “how.”

The best practice for an effective PMO is to develop a list of business objectives and customers that will be served with a business case that illustrates why implementing a PMO is better than the alternatives. The PMO, however one defines it, is not a project.  It is a business unit.  Therefore, just like Human Resources, Marketing, or Facilities, it must justify its existence by improving the lives of its customers.  What that means in your situation, and how to go about it, will be different from others. Below are some perspectives by role.

Customer/CIO: Nearly all business improvement initiatives have a large component of Information Technology (IT) at their core. Frequently, IT is the single largest component, and implementation is often on the critical path to achieving the desired end state. Additionally, IT departments often suffer from a practice of project management that excludes all other departments in an enterprise. This disconnect can create a misalignment in critical path objectives. Unfortunately, the CIO is too often left holding the bag at the end if the broader strategy and governance are not easily accessible. What the CIO needs is clear governance or a seat at the strategy table to manage a complex, inter-related portfolio of initiatives that will deliver success to the company.

CFO: CFOs are expected to forecast and manage capital and operating expenses. Because enterprise business-change initiatives often carry high risk, a CFO has a strong desire to ensure that processes are in place to alert leadership in advance of potential variances and to manage expenses to the forecasted budget, even if that budget was set long before the project requirements were fully known.

CEO: charged with the overall success of the organization, the CEO must manage many competing priorities among multiple departments. Managing a global perspective includes overseeing limited capital investment resources spread across multiple strategic priorities. To that end, CEOs require a method to weigh the various investment options and select the combination with the highest chance of achieving the overall organizational objectives.

Business Unit Leaders (Sponsors):  charged with growing and improving their areas of responsibility. They have a need for a well-defined process to engage IT resources in helping them prioritize projects and source them with the right resources. Furthermore, they need visibility to relevant status reporting with opportunity to make business decisions to navigate a successful result.

Steering Committee: responsible for weighing the costs, risks and benefits of multiple project options, often without certainty of the inputs.  They require a method that provides as much information as possible regarding objectives, resources, and stakeholders.  For projects underway, visibility to insights through reporting enables better decision-making throughout the process.

Project Managers: need support for collecting status data, enabling focus on day-to-day decision making and management rather than task-driven administration; access to resources across multiple matrixed towers in the organization; and access to key stakeholders to make decisions and keep projects on track.

Team members: require easy data collection that supports status reporting without taking a lot of time, and respect for a balance of time between supporting operations and meeting project demands from multiple project manager stakeholders.

Choosing objectives means limiting some, and eliminating others. Prioritization isn’t easy but it’s necessary to increase the probability of extending the long-term value of your projects. There are some great templates that can be used in building and operating a PMO to improve the quality and speed with which we achieve our goals. If you would like more information, drop a comment or email me at guy.wolf@guideit.com. I welcome your feedback, as we strive to do technology right, and do projects right.

BlackBerry Z30: No Update to 10.3.1

I have a BlackBerry Z30 (STA100-5), which I was excited to update to the latest release of BB10, announced on February 9, 2015 (link). However, when I attempted to install the update over the air, it kept telling me that I was already on the latest version. This was obviously incorrect (I was on 10.2.1.3062, the latest version prior to 10.3.1).

Here are the things I tried that were unsuccessful:

  • Reboots (including power off).
  • Removing the SIM and using wifi only.
  • Waiting.

Eventually, on the advice of a friend who already had the update, I was able to get it installed:

  1. Turn off Mobile Network.
  2. Power Off.
  3. Remove SIM.
  4. Power On.
  5. Check for update.

At this point, something was much different: it took significantly longer to check for updates, which of course got me excited, thinking it must actually be doing something. Twenty minutes later I realized I must have been wrong, and killed the Settings app. I then checked for the update again; it immediately found it, and I was able to start the install. Once the update was downloading, I re-inserted my SIM and enabled mobile networking.

There is obviously room for streamlining this procedure (whether you actually need to disable mobile networking and remove the SIM is the most obvious question), but since I didn’t have a box full of devices with this problem, I was unable to optimize it. Feel free to tinker, but if nothing seems to work, give the above a go and see if you have the same experience.

Also important to note: I purchased my BlackBerry directly from the BlackBerry Store. If you purchased yours from a carrier, your mileage may vary depending on the carrier’s update approvals.

Physicians, Clinicians: Thank You

Authored by Mark Johnson, VP Managed Services @ GuideIT

For anyone who has spent the bulk of their career in healthcare IT, a venture into an in/out-patient setting for one’s own health is always an interesting experience.  Throughout the process you can’t help but say – “it’s 2015 and we’re still doing this?”  For me it was in preparation for that first (dreaded) “over 50 procedure”.  It started with far too much paperwork, some of it redundant, and some of it collecting information I had already provided in their portal (sadly with no linkage to my HealthVault account).  Then I arrived in the clinic and was not only faced with more paperwork, but music that was playing way too loud on a morning that I was already grumpy from not being able to eat the day prior.

But then, everything changed.  Once I left the waiting room, every clinician I interacted with was simply outstanding.  From the prep nurse, to the anesthesiologist, to the doctor himself.  They actually seemed to really and truly enjoy their work!  And their positive approach to delivery of care translated directly to an extremely positive patient-clinician interaction.

So while there’s plenty of time to talk about how to better leverage IT in the delivery of care, for me today this is simply a “hats off and well done” to the people who really make such a tremendous difference in our lives – clinicians and their staff.

Oh, and if you’re wondering – it turns out it was a very good thing I had this taken care of. So listen to your physician.

Multi-Sourcing…The Right IT Governance for Maximizing Business Outcomes

Authored by Jeff Smith, VP Business Development @ GuideIT

A national healthcare provider was ready to move from multiple PBX systems to a VOIP-centric model for their communications…the transition, one piece of a broader multi-source IT strategy. Simple enough, right? Not exactly. This transition was a monster…500 locations and more than 1100 buildings. Additionally, the provider cares for patients, the majority of whom are in some form of acute need. Sure, any business requires clean execution in a project of this magnitude. But few businesses have the sole mission of caring for the acute health needs of their customers like healthcare providers do for their patients.

Truly lots of moving parts in this story…a story representing one part of the bigger picture. A critical attribute of this provider’s success was ensuring the right IT Governance function encompassing their multi-source strategy.

So what is the right governance? According to Gartner, governance is the decision framework and process by which enterprises make investment decisions and drive business value. Applied to IT, the definition becomes: “IT Governance (ITG) is the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals. IT demand governance (ITDG—what IT should work on) is the process by which organizations ensure the effective evaluation, selection, prioritization, and funding of competing IT investments; oversee their implementation; and extract business benefits.”

Now consider “why” the right IT Governance is critical in a multi-sourcing environment. When multiple vendor partners serve in support of the broader business mission, the opportunity to optimize outcomes for the business is huge. And so is the risk. The opportunity is there because the organization can leverage the specialization of subject matter experts necessary in a highly complex IT environment driven by growing business demands. One partner specializes in apps, another in cloud infrastructure, another in mobility, and so on. They all bring optimal value in areas critical to support the business…thus the core value of multi-sourcing.

Therein lies the risk too. Without the right governance model, no clear accountability exists to ensure open collaboration and visibility across specialists. Specialists will act in silos. And we all know how silos hurt business. Simply put, the “why” for the right governance is to optimize outcomes through maximizing specialization while minimizing the risk of “silo-creep”. The right governance closes the gap between what IT departments think the business requires and what the business thinks the IT department is able to deliver. Organizations need to have a better understanding of the value delivered by IT and the multiple vendor partners leveraged…some of whom are ushered in through business stakeholders.

Because organizations are relying more and more on new technology, executive leadership must be more aware of critical IT risks and how they are being managed. Take for example our communications transition story from earlier…if there is a lack of clarity and transparency when making such a significant IT decision, the transition project may stall or fail, putting the business at risk and, in this case, patients’ lives at risk. That has a crippling impact on the broader business and on future considerations for the right new technologies to be leveraged.

Conclusion: the right IT Governance is critical to optimizing business outcomes.

Perot Back in IT Services

MAKES MAJOR INVESTMENT IN GUIDEIT

Plano, TX – Monday, February 2, 2015 – GuideIT, a Plano-based provider of technology optimization services, today announced that the Perot family has increased their investment in the company to become its largest shareholder. GuideIT, newly branded as A Perot Company, welcomes Ross Perot, Jr. as a member of the board.

Corporate portrait session with Ross Perot, Ross Perot, Jr., and the founders/executives of GuideIT, taken in the front foyer of Ross Perot, Sr.’s office in Plano, Texas

Back Row: Chuck Lyles, CEO  |  John Furniss, Vice President  |  Scott Barnes, Board Member  |  Tim Morris, Vice President  |  John Lyon, CFO

Front Row: Ross Perot, Jr., Board Member  |  H Ross Perot  |  Russell Freeman, Board Member

“Through EDS and Perot Systems, my family has played a major role in shaping the IT services industry,” said Perot, Jr. “GuideIT has fostered a great entrepreneurial spirit and a strong commitment to delivering customer results in a rapidly growing organization. I look forward to building a great company.”

GuideIT has a suite of solutions and an engagement approach tailored for today’s business environment and technology issues.  The company’s revenue more than tripled in 2014.

“We are building a next-generation services company based on timeless services industry principles,” said Chuck Lyles, CEO.  “We are honored to be associated with the Perot family who are known for their commitment to excellent customer service, outstanding business management and the highest ethical standards.”

GuideIT offers services that help customers optimize their technology environments. Primary offerings include consultative services such as technology vendor management, project management, enterprise assessments, and a suite of deployment and managed services. By deploying these solutions in a collaborative, flexible engagement approach, customers achieve tangible business results.

About GuideIT

As a provider of technology optimization services, we believe doing technology right is the difference between leaders and the rest. We help companies lead.
Through a collaborative and easy-to-do-business-with approach, the company helps customers align IT operations with their strategic business needs, better govern and manage the cost of IT, and effectively navigate change in technology.

Media Contact

James Fuller
Public Strategies, Inc.
214-613-0028
jfuller@pstrategies.com

MultiSourcing…A Critical Strategy for Aligning IT with the Business Mission

Authored by Chuck Lyles, CEO @ GuideIT

A growing trend in IT Services is the implementation of strategies designed to migrate IT operations from a single provider to an environment leveraging multiple specialty companies. As the market matures, this trend can better enable CIOs to execute strategically, driving greater effectiveness and efficiency in operations.

So what are the high level benefits and outcomes of multi-sourcing?

The right multi-sourcing strategy allows IT teams to dilute risk with partners who specialize in a particular discipline or technology.  Additionally, this type of strategy facilitates greater flexibility, enabling the internal agility necessary for adapting to changing priorities…a consistent theme in supporting the broader business mission. Specialized firms are more responsive to customer needs, more motivated to consistently drive innovation, and better at implementing disruptive technologies that drive effectiveness through more automation.

What are some of the challenges and potential pitfalls?

Accountability. Yes, multi-sourcing is a critical approach for leveraging IT in supporting the needs of the business. Yet to be truly strategic in this approach, leaders must require accountability. Fail to create an environment of accountability in execution, and the strategy isn’t worth the paper it’s written on. Another challenge…Simplicity. A “multi” approach, absent a sound strategy, has the potential to introduce complexity and silos into your environment. So what’s the answer for ensuring accountability and simplicity in your multi-sourcing approach? Clear purpose, aligned incentives, and shared values. Easy to say; tough to do. More on this in future posts.

What’s your perspective on multi-source strategies?

SPARC Logical Domains: Alternate Service Domains Part 3

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two of this series we took the information from Part One and split out our PCI Root Complexes and we configured and installed an alternate domain.  We were also able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three (this article) we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.  At the end of this article, we will be able to reboot either the primary or alternate domain without it having an impact on any of the running guests.

Create Redundant Virtual Services

So at this point, we have a fully independent I/O Domain named alternate.  This is great for some use cases; however, if we don’t enable it to be a Service Domain as well, we won’t be able to extend that independence to our Guest Domains.  That requires creating Virtual Services for each of these critical components of a domain.

We previously created a primary-vds0, and that will suit us just fine; however, we will also need an alternate-vds0.

# ldm add-vdiskserver primary-vds0 primary
# ldm add-vdiskserver alternate-vds0 alternate

We did not provision any Virtual Switches previously as we had no need of it since we had handed out physical NICs directly to primary and alternate.  Here we will create both primary-vsw0 and alternate-vsw0.

# ldm add-vswitch net-dev=net0 primary-vsw0 primary
# ldm add-vswitch net-dev=net0 alternate-vsw0 alternate

To connect to the console of LDOMs we must have a virtual console concentrator.  This should already have been set up in order to install the alternate domain.

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary

Now let’s save our settings since we have set up the services.

# ldm add-config redundant-virt-services

With our progress saved we can move on.

Creating Multipath Storage Devices

In order to utilize the redundancy of LDM, we will need to create redundant virtual disk devices.  The key difference here is that we will need to specify an mpgroup.

# ldm add-vdsdev mpgroup=san01-fc primary-backend ldm1-disk0@primary-vds0

And now the same device, using the alternate domain.

# ldm add-vdsdev mpgroup=san01-fc alternate-backend ldm1-disk0@alternate-vds0

Another thing to notice: when using multiple protocols on the same SAN, it is important to use a different mpgroup for each protocol, because you can have failures in one interconnect layer that don’t affect other protocols.  Case in point: a failure of the FC fabric wouldn’t affect the availability of NFS services, so those failures need to be monitored separately. The jury is still out on where the line should be drawn in terms of what goes into a single mpgroup.  As I was testing live migration, it seemed more effective to use the VM and the protocol as the boundary, as migration checks the mpgroup for the number of members on both sides as part of its validation. So, in this case, it might be ldm1-fc and ldm1-nfs.
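To make that naming boundary concrete, here is a tiny, purely illustrative sketch that generates per-VM, per-protocol mpgroup names in the ldm1-fc / ldm1-nfs style; the VM name and protocol list are assumptions, not commands from the article.

```shell
# Illustrative only: emit one mpgroup name per VM/protocol pair,
# following the ldm1-fc / ldm1-nfs convention discussed above.
vm=ldm1
for proto in fc nfs; do
  printf '%s-%s\n' "$vm" "$proto"
done
```

Each generated name would then be used as the mpgroup= value for both the primary and alternate vdsdev of that VM/protocol pair.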

# ldm add-vdsdev mpgroup=san01-nfs primary-backend ldm1-disk1@primary-vds0

Again the same device for the alternate domain.

# ldm add-vdsdev mpgroup=san01-nfs alternate-backend ldm1-disk1@alternate-vds0

Now we are ready to support the domain; next, we will create the domain and assign the disk resources.  Important to note: we do not assign BOTH disk resources, only the primary. The mpgroup will take care of the redundancy.

# ldm add-domain ldm1
# ldm set-vcpu 16 ldm1
# ldm set-memory 16G ldm1
# ldm add-vdisk disk0 ldm1-disk0@primary-vds0 ldm1

In the next section we will create some redundant network interfaces.

Creating Redundant Guest Networking

Redundant networking is really no different from non-redundant networking; we simply create two VNICs, one on primary-vsw0 and the other on alternate-vsw0. Once provisioned, we create an IPMP interface inside of the guest. In theory you could use DLMP as well, though I haven’t tested this option.

# ldm add-vnet vnet0 primary-vsw0 ldm1
# ldm add-vnet vnet1 alternate-vsw0 ldm1

Now we need to bind the domain, start it, and install the OS inside the guest.

# ldm bind ldm1
# ldm start ldm1

I am assuming that you know how to install Solaris, as you will already have done so at least twice to get to this point.  Now it is time to configure networking. If you need help with configuring networking, see the following articles.

Solaris 11: Network Configuration Basics

Solaris 11: Network Configuration Advanced

ldm1# ipadm create-ip net0
ldm1# ipadm create-ip net1
ldm1# ipadm create-ipmp -i net0 -i net1 ipmp0
ldm1# ipadm create-addr -T static -a 192.168.1.11/24 ipmp0/v4
ldm1# route -p add default 192.168.1.1

At this point, you have all the pieces in place for redundant guests.  Now it is time to do some rolling reboots of the primary and alternate domains to ensure your VM stays up and running.  Inside the guest, the only thing amiss is that you will see IPMP members go into a failed state, and then come back up as the services are restored.

One final note.  From the ILOM prompt, issuing -> stop /SYS will shut down the physical hardware, which takes down both domains and all guests.

SPARC Logical Domains: Alternate Service Domains Part 2

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two (this article) we are going to take that information and split out our PCI Root Complexes and configure and install an alternate domain.  At the end of this article, you will be able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.

Remove PCI Roots From Primary

The changes that we need to make will require that we put LDM into delayed reconfiguration mode, which will require a reboot to implement the changes.  This mode also prevents further changes to other domains.

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we remove the unneeded PCI Roots from the primary domain; this will allow us to assign them to the alternate domain.

# ldm remove-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# ldm remove-io pci_3 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's save our configuration.

# ldm add-config reduced-io

Now a reboot to make the configuration active.

# reboot

When it comes back up we should see the PCI Roots unassigned.

Create Alternate Domain

Now we can create our alternate domain and assign it some resources.

# ldm add-domain alternate
# ldm set-vcpu 16 alternate
# ldm set-memory 16G alternate

We have set this to 2 cores (16 vCPUs) and 16GB of RAM.  Your sizing will depend on your use case.

Add PCI Devices to Alternate Domain

We are assigning pci_1 and pci_3 to the alternate domain, which gives it direct access to two of the on-board NICs, two of the disks, and half of the PCI slots.  It will also inherit the CDROM as well as the USB controller.

One quick point worth calling out: the disks are not split evenly; pci_0 has four disks, while pci_3 only has two.  That said, if your configuration included 6 disks, then I would recommend using the third and fourth in the primary as a non-redundant storage pool, perhaps to stage firmware and such for patching.  The bottom line is that you need to purchase the hardware with 4 drives minimum.

# ldm add-io pci_1 alternate
# ldm add-io pci_3 alternate

Here we have NICs and disks on our alternate domain; now we just need something to boot from and we can get the install going.

Let's save our config before moving on.

# ldm add-config alternate-domain

With the config saved we can move on to the next steps.

Install Alternate Domain

We should still have our CD in from the install of the primary domain.  After switching the PCI Root Complexes the CD drive will be presented to the alternate domain (as it is attached to pci_3).

First thing to do is bind our domain.

# ldm bind alternate

Then we need to start the domain.

# ldm start alternate

Next we need to determine what port telnet is listening on for this particular domain.  In our case we can see it is 5000.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m

When using these various consoles you always need to be attentive to the escape sequence; in the case of telnet it is ^], which is “CTRL” + “]”. Once we have determined where we can telnet to, we can start the connection.  Also important to note: you will see ::1: Connection refused. This is because we are connecting to localhost; if you don’t want to see that error, connect to 127.0.0.1 (the IPv4 loopback address) instead.
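If you end up scripting console access, the port can be pulled straight out of the ldm ls output. The sketch below runs against a hard-coded copy of the sample listing; on a live hypervisor you would pipe ldm ls itself through the same awk.

```shell
# Extract the console (CONS) port for the "alternate" domain from
# sample `ldm ls` output; CONS is the fourth column.
ldm_ls_sample='NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m'
port=$(printf '%s\n' "$ldm_ls_sample" | awk '$1 == "alternate" { print $4 }')
echo "$port"
```

On the control domain, that port could then drive telnet localhost "$port".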

# telnet localhost 5000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to AK00176306.
Escape character is '^]'.

Connecting to console "alternate" in group "alternate" ....
Press ~? for control options ..

telnet> quit
Connection to AK00176306 closed.

I will let you go through the install on your own, but I am assuming that you know how to install the OS itself.

Now let's save our config, so that we don’t lose our progress.

# ldm add-config alternate-domain-config

At this point, if we have done everything correctly, we can reboot the primary domain without disrupting service to the alternate domain.  Running pings during a reboot will illustrate where we are in the build. Of course, you would have to have networking configured on the alternate domain. And don’t forget the simple stuff like mirroring your rpool; it would be a pity to go to all this trouble and not have a basic level of redundancy such as mirrored disks.

Test Redundancy

At this point, the alternate and the primary domain are completely independent.  To validate this I recommend setting up a ping to both the primary and the alternate domain and rebooting the primary.  If done correctly, you will not lose any pings to the alternate domain. Keep in mind that while the primary is down you will not be able to utilize the “control domain”, which is the only domain that can configure and start/stop other domains.

SPARC Logical Domains: Alternate Service Domains Part 1

In this series, we will be going over configuring alternate I/O and Service Domains, with the goal of increasing the serviceability of SPARC T-Series servers without impacting other domains on the hypervisor.  Essentially, this enables rolling maintenance without having to rely on live migration or downtime. It is important to note that this is not a cure-all; for example, base firmware updates would still be disruptive. However, minor firmware, such as updates for disks and I/O cards, should be able to be rolled.

In Part One we will go through the initial Logical Domain configuration, as well as mapping out the devices we have and if they will belong in the primary or the alternate domain.

In Part Two we will go through the process of creating the alternate domain and assigning the devices to it, thus making it independent of the primary domain.

In Part Three we will create redundant services to support our Logical Domains as well as create a test Logical Domain to utilize these services.

Initial Logical Domain Configuration

I am going to assume that your configuration is currently at the factory default and that you, like me, are using Solaris 11.2 on the hypervisor.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 511G 0.4% 0.3% 6h 24m

The first thing we need to do is remove some of the resources from the primary domain so that we are able to assign them to other domains.  Since the primary domain is currently active and using these resources, we will enable delayed reconfiguration mode: it accepts all changes, and then on a reboot of that domain (in this case primary, which is the control domain on the physical machine) the configuration takes effect.

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we can start reclaiming some of those resources.  I will assign 2 cores (16 vCPUs) and 16GB of RAM to the primary domain.

# ldm set-vcpu 16 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# ldm set-memory 16G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

I like to save configurations often when we are doing a lot of changes.

# ldm add-config reduced-resources

Next we will need some services to allow us to provision disks to domains and to connect to the console of domains for the purposes of installation or administration.

# ldm add-vdiskserver primary-vds0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's add another configuration to bookmark our progress.

# ldm add-config initial-services

We need to enable the Virtual Network Terminal Server service; this allows us to telnet from the control domain into the other domains.

# svcadm enable vntsd

Finally a reboot will put everything into action.

# reboot

When the system comes back up we should see a drastically different LDM configuration.

Identify PCI Root Complexes

All the T5-2’s that I have looked at have been laid out the same, with a SAS HBA and an onboard NIC on pci_0 and pci_3, and the PCI slots spread across all four roots.  So to split everything evenly, pci_0 and pci_2 stay with the primary, while pci_1 and pci_3 go to the alternate. However, so that you understand how we know this, I will walk you through identifying the complexes as well as the discrete types of devices.

# ldm ls -l -o physio primary

NAME
primary

IO
DEVICE PSEUDONYM OPTIONS
pci@340 pci_1
pci@300 pci_0
pci@3c0 pci_3
pci@380 pci_2
pci@340/pci@1/pci@0/pci@4 /SYS/MB/PCIE5
pci@340/pci@1/pci@0/pci@5 /SYS/MB/PCIE6
pci@340/pci@1/pci@0/pci@6 /SYS/MB/PCIE7
pci@300/pci@1/pci@0/pci@4 /SYS/MB/PCIE1
pci@300/pci@1/pci@0/pci@2 /SYS/MB/SASHBA0
pci@300/pci@1/pci@0/pci@1 /SYS/MB/NET0
pci@3c0/pci@1/pci@0/pci@7 /SYS/MB/PCIE8
pci@3c0/pci@1/pci@0/pci@2 /SYS/MB/SASHBA1
pci@3c0/pci@1/pci@0/pci@1 /SYS/MB/NET2
pci@380/pci@1/pci@0/pci@5 /SYS/MB/PCIE2
pci@380/pci@1/pci@0/pci@6 /SYS/MB/PCIE3
pci@380/pci@1/pci@0/pci@7 /SYS/MB/PCIE4

This shows us that pci@300 = pci_0, pci@340 = pci_1, pci@380 = pci_2, and pci@3c0 = pci_3.
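The address-to-pseudonym table can also be rebuilt programmatically from that listing. The sketch below works over a hard-coded copy of the four pseudonym lines; on a live system you would pipe the real ldm ls -l -o physio primary output through the same awk.

```shell
# Build the pci@address -> pseudonym map from the IO section above:
# the pseudonym lines are the ones whose second field starts with "pci_".
physio_sample='pci@340 pci_1
pci@300 pci_0
pci@3c0 pci_3
pci@380 pci_2'
printf '%s\n' "$physio_sample" | awk '$2 ~ /^pci_/ { print $1, "=", $2 }' | sort
```

Sorted, this prints the same mapping stated above: pci@300 = pci_0 through pci@3c0 = pci_3.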

Map Local Disk Devices To PCI Root

First we need to determine which disk devices are in the zpool, so that we know which ones that cannot be removed.

# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 70.3G in 0h8m with 0 errors on Fri Feb 21 05:56:34 2014
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA04385ED60d0 ONLINE 0 0 0
c0t5000CCA0438568F0d0 ONLINE 0 0 0

errors: No known data errors

Next we must use mpathadm to find the Initiator Port Name.  To do that we must look at slice 0 of c0t5000CCA04385ED60d0.

# mpathadm show lu /dev/rdsk/c0t5000CCA04385ED60d0s0
Logical Unit: /dev/rdsk/c0t5000CCA04385ED60d0s2
mpath-support: libmpscsi_vhci.so
Vendor: HITACHI
Product: H109060SESUN600G
Revision: A606
Name Type: unknown type
Name: 5000cca04385ed60
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA

Paths:
Initiator Port Name: w5080020001940698
Target Port Name: w5000cca04385ed61
Override Path: NA
Path State: OK
Disabled: no

Target Ports:
Name: w5000cca04385ed61
Relative ID: 0

Our output shows us that the initiator port is w5080020001940698.

# mpathadm show initiator-port w5080020001940698
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4

So we can see that this particular disk is on pci@300, which is pci_0.

Map Ethernet Cards To PCI Root

First we must determine the underlying device for each of our network interfaces.

# dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 10000 full ixgbe0

In this case the device is ixgbe0.  We can then look at the device tree to see where the link points, to find which PCI Root this device is connected to.

# ls -l /dev/ixgbe0
lrwxrwxrwx 1 root root 53 Feb 12 2014 /dev/ixgbe0 -> ../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0

Now we can see that it is using pci@300, which translates into pci_0.

Map Infiniband Cards to PCI Root

Again, let's determine the underlying device name of our infiniband interfaces.  On my machine they defaulted to net2 and net3; however, I had previously renamed the links to ib0 and ib1 for simplicity.  This procedure is very similar to Ethernet cards.

# dladm show-phys ib0
LINK MEDIA STATE SPEED DUPLEX DEVICE
ib0 Infiniband up 32000 unknown ibp0

In this case our device is ibp0.  So now we just check the device tree.

# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Nov 26 07:17 /dev/ibp0 -> ../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0

We can see by the path, that this is using pci@380 which is pci_2.

Map Fibre Channel Cards to PCI Root

Now perhaps we need to have some Fibre Channel HBAs split up as well.  The first thing we must do is look at the cards themselves.

# luxadm -e port
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl NOT CONNECTED

We can see here that these use pci@300 which is pci_0.
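All four mappings in this part (disk, Ethernet, InfiniBand, Fibre Channel) reduce to the same step: pull the first pci@ component out of a device path, then look it up in the pseudonym table. A hedged helper sketch follows; the function name is my own, and the paths are copied from the outputs above.

```shell
# pci_root: print the root-complex address (the first pci@... component)
# from a /devices path, as done by hand in the mappings above.
pci_root() {
  printf '%s\n' "$1" | sed -n 's|.*/devices/\(pci@[0-9a-f]*\)/.*|\1|p'
}

pci_root '/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl'                              # pci@300
pci_root '../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0'      # pci@380
```

The same helper works on the mpathadm OS Device File lines and the /dev symlink targets shown earlier.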

The Plan

Basically we are going to split our PCI devices by even and odd, with even-numbered roots staying with the primary and odd-numbered roots going to the alternate.  On the T5-2, this results in the PCI-E cards on the left side belonging to the primary, and the cards on the right to the alternate.
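That even/odd rule can be sketched as a trivial loop (illustrative only):

```shell
# Even-numbered PCI roots stay with the primary domain,
# odd-numbered roots go to the alternate domain.
for root in pci_0 pci_1 pci_2 pci_3; do
  n=${root#pci_}
  if [ $((n % 2)) -eq 0 ]; then
    echo "$root -> primary"
  else
    echo "$root -> alternate"
  fi
done
```

This matches the assignments made in Part Two, where pci_1 and pci_3 were handed to the alternate domain.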

Here is a diagram of how the physical devices are mapped to PCI Root Complexes.

FIGURE 1.1 – Oracle SPARC T5-2 Front View

FIGURE 1.2 – Oracle SPARC T5-2 Rear View

References

SPARC T5-2 I/O Root Complex Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.z40005601508415.html

SPARC T5-2 Front Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgcddce.html#scrolltoc

SPARC T5-2 Rear Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgdeaei.html#scrolltoc

SPARC Logical Domains: Live Migration

One of the ways that we are able to accomplish regularly scheduled maintenance is by utilizing Live Migration; with this we can migrate workloads from one physical machine to another without service interruption.  The way that it is done with Logical Domains is much more flexible than with most other hypervisor solutions: it doesn’t require any complicated cluster setup or management layer, so you could literally utilize any compatible hardware at the drop of a hat.

This live migration article also builds on some technology that I have written about but not yet published (it should be published within the next week): Alternate Service Domains. If you are using alternate service domains, Live Migration is still possible; if you are not, Live Migration is actually easier (the underlying devices are simpler, so they are simpler to match).

Caveats to Migration

  • Virtual Devices must be accessible on both servers, via the same service name (though the underlying paths may be different).
  • IO Domains cannot be live migrated.
  • Migrations can be either online (“live”) or offline (“cold”); the state of the domain determines which.
  • When doing a cold migration virtual devices are not checked to ensure they exist on the receiving end, you will need to check this manually.
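For cold migrations in particular (the last caveat above), the device check is on you. A hedged sketch of what that manual check might look like, using hard-coded sample service lists standing in for real per-machine inventories (which you would gather, for example, from ldm list-services on each side):

```shell
# Compare virtual disk service names from source and target;
# the lists below are sample data, not output from a real system.
src_services='primary-vds0
alternate-vds0'
tgt_services='primary-vds0'
printf '%s\n' "$src_services" | while read -r svc; do
  printf '%s\n' "$tgt_services" | grep -qx "$svc" \
    || echo "missing on target: $svc"
done
```

Any "missing on target" line means the cold-migrated domain would fail to bind on the receiving machine.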

Live Migration Dry Run

I recommend performing a dry run of any migration prior to performing the actual migration.  This will highlight any configuration problems prior to the migration happening.

# ldm migrate-domain -n ldom1 root@server
Target Password:

This will surface any errors that would occur in an actual migration, without actually performing one.

Live Migration

When you are ready to perform the migration then remove the dry run flag.  This process will also do the appropriate safety checks to ensure that everything is good on the receiving end.

# ldm migrate-domain ldom1 root@server
Target Password:

Now the migration will proceed, and unless something goes wrong, the domain will come up on the other system.

Live Migration With Rename

We can also rename the logical domain as part of the migration; we simply specify the new name.

# ldm migrate-domain ldom1 root@server:ldom2
Target Password:

In this case, the original name was ldom1 and the new name is ldom2.

Common Errors

Here are some common errors.

Bad Password or No LDM on Target

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to establish connection with ldmd(1m) on target: server
Check that the 'ldmd' service is enabled on the target machine and
that the version supports Domain Migration. Check that the 'xmpp_enabled'
and 'incoming_migration_enabled' properties of the 'ldmd' service on
the target machine are set to 'true' using svccfg(1M).

Probable Fixes – Ensure you are attempting to migrate to the correct hypervisor, that the username/password combination is correct, that the user has the appropriate level of access to ldmd, and that ldmd is running.

Missing Virtual Disk Server Devices

# ldm migrate-domain ldom1 root@server
Target Password:
The number of volumes in mpgroup 'zfs-ib-nfs' on the target (1) differs
from the number on the source (2)
Domain Migration of LDom ldom1 failed

Probable Fixes – Ensure that the underlying virtual disk devices match; if you are using mpgroups, the entire mpgroup must match on both sides.

Missing Virtual Switch Device

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to find required vsw alternate-vsw0 on target machine
Domain Migration of LDom logdom1 failed

Probable Fixes – Ensure that the underlying virtual switch devices match on both locations.

Check Migration Progress

One thing to keep in mind is that during the migration process, the hypervisor that is being evacuated is the authoritative one in terms of controlling the process, so status should be checked there.

source# ldm list -o status ldom1
NAME
logdom1

STATUS
OPERATION PROGRESS TARGET
migration 20% 172.16.24.101:logdom1

It can however be checked on the receiving end, though it will look a little bit different.

target# ldm list -o status logdom1
NAME
logdom1

STATUS
OPERATION PROGRESS SOURCE
migration 30% ak00176306-primary

The big thing to notice is that it shows the source on this side.  Also, if we changed the name as part of the migration, the status will show the new name.

Cancel Migration

Of course, if you need to cancel a migration, this would be done on the hypervisor that is being evacuated, since it is authoritative.

# ldm cancel-operation migration ldom1
Domain Migration of ldom1 has been canceled

This allows you to cancel an accidentally started migration, although in practice most migrations you would want to cancel will generate an error on their own before you ever need this.

Cross CPU Considerations

By default, logical domains are created to use the specific CPU features of the hardware they run on; as such, live migration works by default only between the exact same CPU type and generation.  However, if we change the cpu-arch property of the domain, we can trade hardware-specific features for migration flexibility.  The available values are:

Native – Allows migration between the same CPU type and generation.

Generic – Uses the most generic processor feature set, allowing the widest live migration capability.

Migration Class 1 – Allows migration between T4, T5 and M5 server classes (also supports M10, depending on firmware version).

SPARC64 Class 1 – Allows migration between Fujitsu M10 servers.

Here is an example of how you would change the CPU architecture of a domain.  I personally recommend using this sparingly and building your hardware infrastructure so that you have capacity on the same generation of hardware; in certain circumstances, however, this can make a lot of sense if the performance implications are not too great.

# ldm set-domain cpu-arch=migration-class1 ldom1

I personally wouldn’t build a design around the cross-CPU functionality, though in some cases it might make sense for your situation.  Either way, live migration of logical domains is implemented very effectively and adds a lot of value.

Solaris 11: Configure IP Over Infiniband Devices

In this article we will configure an infiniband interface with the IPoIB protocol on Solaris 11, specifically Solaris 11.2 (previous Solaris 11 releases should work much the same, though there have been changes to the ipadm and dladm commands).

Identify Infiniband Datalinks

First we need to identify the datalinks that correspond to the infiniband devices; in my case, net2 and net3.

# dladm show-phys
LINK    MEDIA       STATE    SPEED   DUPLEX   DEVICE
net1    Ethernet    unknown  0       unknown  ixgbe1
net0    Ethernet    up       1000    full     ixgbe0
net2    Infiniband  up       32000   unknown  ibp0
net3    Infiniband  up       32000   unknown  ibp1
net5    Ethernet    up       1000    full     vsw0

Another way to confirm the infiniband interfaces is to use the show-ib command.

# dladm show-ib
LINK  HCAGUID         PORTGUID        PORT  STATE  GWNAME      GWPORT    PKEYS
net2  10E0000128EBC8  10E0000128EBC9  1     up     kel01-gw01  0a-eth-1  7FFF,FFFF
                                                  kel01-gw02  0a-eth-1
net3  10E0000128EBC8  10E0000128EBCA  2     up     kel01-gw01  0a-eth-1  7FFF,FFFF
                                                  kel01-gw02  0a-eth-1
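The PKEYS column is the list of partitions the port can join, which we will use shortly. If you ever want to pull it out programmatically, here is a minimal Python sketch; the parsing assumptions (whitespace-separated columns with PKEYS last, as in the output above) and the helper name are mine:

```python
def pkeys_for(show_ib_output, link):
    """Return the partition keys (PKEYS column) for a datalink in
    'dladm show-ib' output.  Continuation lines carry no LINK field,
    so only rows starting with the link name are considered."""
    for line in show_ib_output.splitlines():
        fields = line.split()
        if fields and fields[0] == link:
            return fields[-1].split(",")  # last column, e.g. '7FFF,FFFF'
    return []

# Sample rows taken from the show-ib output above (continuation lines omitted)
sample = """LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net2 10E0000128EBC8 10E0000128EBC9 1 up kel01-gw01 0a-eth-1 7FFF,FFFF
net3 10E0000128EBC8 10E0000128EBCA 2 up kel01-gw01 0a-eth-1 7FFF,FFFF
"""
print(pkeys_for(sample, "net2"))  # -> ['7FFF', 'FFFF']
```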

Rename Infiniband Datalinks

I like to rename the datalinks to ib0 and ib1; it makes everything easier to keep nice and tidy.

# dladm rename-link net2 ib0
# dladm rename-link net3 ib1

Now to show the updated datalinks.

# dladm show-phys
LINK    MEDIA       STATE    SPEED   DUPLEX   DEVICE
net1    Ethernet    unknown  0       unknown  ixgbe1
net0    Ethernet    up       1000    full     ixgbe0
ib0     Infiniband  up       32000   unknown  ibp0
ib1     Infiniband  up       32000   unknown  ibp1
net5    Ethernet    up       1000    full     vsw0

Now in subsequent actions we will use ib0 and ib1 as our datalinks.

Create Infiniband Partition

First, let's talk about partitions.  Partitions are most closely analogous to VLANs; however, their purpose is to provide isolated segments, so there is no concept of a “router” on IB.  Typical use cases are isolating storage or database traffic, or isolating customers from one another (which you definitely should do if you have a multitenant environment where customers have access to the operating system).  The first step is to identify the partition to be created; if you do not use IB partitioning, you will use the “default” partition, ffff.

# dladm create-part -l ib0 -P 0xffff pffff.ib0

If you do use partitioning, you will need to specify the partition you wish to use, in this example 7fff.  The available partitions are listed in the PKEYS column of the dladm show-ib output above.

# dladm create-part -l ib0 -P 0x7fff p7fff.ib0

Now let's review the partitions.

# dladm show-part
LINK       PKEY  OVER  STATE    FLAGS
pffff.ib0  FFFF  ib0   unknown  ----
p7fff.ib0  7FFF  ib0   unknown  ----

We now have our two partitions defined.

Create IP Interfaces

Now that the Infiniband pieces are configured, we simply create the IP interfaces so that we can subsequently assign IP addresses.  The IP interfaces are named after the partition datalinks (ibpartition.interfacename).  Below is the interface for the “default” partition.

# ipadm create-ip pffff.ib0

And for our named partition for 7fff we create an interface as well.

# ipadm create-ip p7fff.ib0

Now we have our interfaces configured correctly.
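The naming convention is mechanical: the letter p, the PKEY in lower-case hex, a dot, and the underlying datalink. A tiny Python sketch of the convention (the helper name is mine, not part of any Solaris tooling):

```python
def part_ip_interface(pkey, datalink):
    """Build the partition datalink / IP interface name used above:
    'p' + four hex digits of the PKEY + '.' + underlying datalink."""
    return "p%04x.%s" % (pkey, datalink)

print(part_ip_interface(0xffff, "ib0"))  # -> pffff.ib0
print(part_ip_interface(0x7fff, "ib0"))  # -> p7fff.ib0
```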

Create IP Address

Now the easy part: this is exactly the same as for a standard Ethernet interface.  Assign a static IP address on the default partition interface.

# ipadm create-addr -T static -a 10.1.10.11/24 pffff.ib0/v4

Also for our named partition.

# ipadm create-addr -T static -a 10.2.10.11/24 p7fff.ib0/v4

Now a few ping tests and we are in business.  Remember you will not be able to ping from one partition to another, so you will need to identify a few endpoints on your existing Infiniband networks to test your configuration.

Adventures in ZFS: Mirrored Rpool

It always makes sense to have a mirrored rpool on production systems; however, that is not always how they are configured.  This procedure is really simple, but also critical.

Create a Mirrored Zpool

Check the existing devices to identify the one currently in use.

# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        rpool                    ONLINE       0     0     0
          c0t5000CCA0436359CCd0  ONLINE       0     0     0

errors: No known data errors

Once we know which one is currently in use, we need to find a different one to mirror onto.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA0436359CCd0 <HITACHI-H109030SESUN300G-A606-279.40GB>
          /scsi_vhci/disk@g5000cca0436359cc
          /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD0/disk
       1. c0t5000CCA043650CD8d0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625> solaris
          /scsi_vhci/disk@g5000cca043650cd8
          /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD1/disk
Specify disk (enter its number):

Then we can build our mirrored rpool; this part is exactly the same as creating a mirror for any other zpool.

# zpool attach rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0
vdev verification failed: use -f to override the following errors:
/dev/dsk/c0t5000CCA043650CD8d0s0 contains a ufs filesystem.
/dev/dsk/c0t5000CCA043650CD8d0s6 contains a ufs filesystem.
Unable to build pool from specified devices: device already in use

In some cases, as here, the new disk will have an existing file system on it, in which case we need to force the attach.  Please use caution with force; it can cause problems if you point it at a disk that is actually in use by another zpool on the system.

# zpool attach -f rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0
Make sure to wait until resilver is done before rebooting.

That will start the resilvering process, and we must wait for it to finish completely before rebooting.  Depending on the size of your disks, it might be time for coffee or lunch.

# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Fri Nov 28 10:11:03 2014
        224G scanned
        6.67G resilvered at 160M/s, 2.86% done, 0h23m to go
config:

        NAME                       STATE     READ WRITE CKSUM
        rpool                      DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            c0t5000CCA0436359CCd0  ONLINE       0     0     0
            c0t5000CCA043650CD8d0  DEGRADED     0     0     0  (resilvering)

errors: No known data errors
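If you would rather script the wait than keep re-running the command, the progress figure is easy to extract. A minimal Python sketch; the regular expression simply matches the "% done" phrase shown in the scan line above, and the helper name is mine:

```python
import re

def resilver_percent(zpool_status):
    """Pull the '% done' figure from 'zpool status' resilver progress.

    Returns a float, or None once no resilver progress is reported."""
    m = re.search(r"([\d.]+)% done", zpool_status)
    return float(m.group(1)) if m else None

# Scan line taken from the status output above
sample = "6.67G resilvered at 160M/s, 2.86% done, 0h23m to go"
print(resilver_percent(sample))  # -> 2.86
```

A loop that sleeps while this returns a value is a simple way to block a maintenance script until the resilver completes.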

Let's check again and see if it has finished.

# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 224G in 0h27m with 0 errors on Fri Nov 28 10:38:25 2014
config:

        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000CCA0436359CCd0  ONLINE       0     0     0
            c0t5000CCA043650CD8d0  ONLINE       0     0     0

errors: No known data errors

If you are mirroring an ordinary zpool, that is the end of it.  However, for rpool, your mirror will not be worth much if the second disk doesn’t also carry the boot blocks.

Install Boot Blocks on SPARC

If your system is SPARC, you will use the installboot utility to install the boot blocks on the disk to ensure you will be able to boot from it in the event of primary disk failure.

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0
WARNING: target device /dev/rdsk/c0t5000CCA043650CD8d0s0 has a versioned bootblock but no versioning information was provided.
bootblock version installed on /dev/rdsk/c0t5000CCA043650CD8d0s0 is more recent or identical
Use -f to override or install without the -u option

Again, if the disk is not brand new, it might have existing boot blocks on it, which we will need to overwrite with force.

# installboot -f -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0

This wraps it up for a SPARC installation; it, of course, makes sense to test booting from the second disk as well.

Install Boot Blocks on x86

If you are using an x86 system, then you will need to use the installgrub utility.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000CCA043650CD8d0s0

There you have it.  We have successfully mirrored our x86 system as well.

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray and take a look at using CentOS as a hypervisor, configuring very resilient network connections to support our guests.  These instructions should be valid on Red Hat Enterprise Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions.

Enable Bonding

I am assuming this is a fresh build for you, in which case the bonding module will not yet be loaded; if it already is, this step is not needed, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which increases resiliency; mode=1 is active-backup, so if one physical link fails the other takes over.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is where KVM creates the vNICs that allow guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0

Service Restart

Finally, the easy part.  One snag I ran into: if you previously configured IP addresses directly on bond0, you will have a tough time getting rid of them with a service restart alone; I found it easier to reboot the box itself.

# service network restart
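Once the network is back up, you can verify the bond state by reading /proc/net/bonding/bond0. A minimal Python sketch of checking which slave is active; the sample text below is an illustrative excerpt of that file's format, not captured output from this build, and the helper name is mine:

```python
def active_slave(bonding_text):
    """Return the currently active slave from the contents of
    /proc/net/bonding/bondX (active-backup mode)."""
    for line in bonding_text.splitlines():
        if line.startswith("Currently Active Slave:"):
            return line.split(":", 1)[1].strip()
    return None

# Illustrative excerpt of /proc/net/bonding/bond0 for mode=1 (active-backup)
sample = """Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
"""
print(active_slave(sample))  # -> eth0
```

On the hypervisor itself, you would pass `open("/proc/net/bonding/bond0").read()` to the helper, then pull a cable and confirm the active slave fails over.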

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with is integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out readily enough.  Calendars, however, turned out to be more of a challenge than I expected.

Here are the versions that I validated these steps on.

  • BlackBerry Z10 with OS 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming that you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts (mine obfuscated, of course).  We are going to add another one, so select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3 that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to complete a synchronization successfully.

Figure 1-4 – Advanced Setup

In Figure 1-4 we select CalDAV as the type of account to use.  A little footnote: I was unable to get CardDAV working.  I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we populate all of the information needed to make a connection.  Keep in mind that the username needs to be in the form user@domain.tld, and the Server Address should be in the following format: https://zimbra.domain.tld/dav/user@domain.tld/Calendar.  The important bits here are (1) https – I suspect http works as well, but I did not validate it; (2) the username – it is a component of the URI, which makes this a little tough to implement for less sophisticated users; and (3) Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether calendars with other names can be used, but this is the name needed in most situations.
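Because the username is embedded in the URI, it can help to generate each user's Server Address programmatically when rolling this out. A minimal Python sketch; the server and account names below are hypothetical placeholders, and the URL shape simply follows the format described above:

```python
def caldav_url(server, user, calendar="Calendar"):
    """Build a Zimbra CalDAV Server Address of the form
    https://<server>/dav/<user>/<calendar>.

    'server' and 'user' are placeholders supplied by the caller."""
    return "https://%s/dav/%s/%s" % (server, user, calendar)

print(caldav_url("zimbra.example.com", "jdoe@example.com"))
# -> https://zimbra.example.com/dav/jdoe@example.com/Calendar
```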

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology and healthcare centric marketing firm, we at illumeture work with emerging companies to achieve more of the right conversations with the right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT, responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today.

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model, in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary: you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end-to-end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions as well; in this case, I found it on Voyage Linux, a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, the package it is failing to find.  Once installed, debconf no longer needs to fall back to the Readline frontend.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around; it is very stripped down, and as such there are a few annoyances we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to generate en_US.UTF-8, or whichever locale is correct for your situation, and set it as the default.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.

SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh

Oracle SQL Developer
Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.

LOAD TIME : 279
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000038a1e64910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log
[thread 140449881597696 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

I also noticed that it worked when executed as root; however, that clearly isn’t the “solution”.

Fixing the Problem

The fix is to unset the GNOME_DESKTOP_SESSION_ID environment variable as part of the launcher script.

$ cat /opt/sqldeveloper/sqldeveloper.sh
#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Once this was completed, SQL Developer launched clean for me.

 

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions to implement a defense-in-depth security strategy and position the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of the operations touching everything from facilities operations to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment, paired with a lack of internal resources, left the organization struggling to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments arriving at that threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, leading to increased risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address the shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust patch management strategy would ensure the environment was properly safeguarded against existing vulnerabilities with the latest available updates. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions to secure the infrastructure and change end-user behavior, securing the IT environment.

The Implementation

1. ASSESSMENT - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. PLANNING - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, end-point protection, infrastructure security, and email security.
3. DEPLOYMENT - Agents for the centrally managed end-point protection solution were deployed within a week. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been either blocked or resolved, ensuring the endpoints and users are secure. The email security solution initially scanned over 83,000 emails effectively protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages being successfully derived during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has once again been named one of the fastest-growing entrepreneurial companies in the SMU Cox Dallas 100™ awards, marking its third year on the list.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of Dallas-area entrepreneurs. The award focuses not only on growth, but also on an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100,” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading-edge solutions to market. We place a high value on the entrepreneurial spirit which has contributed to the success and growth we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence while delivering technology with an industry specific context to enable the creation of business value and create an IT experience that delivers. 

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth.  Learn more at www.guideit.com

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their current server, which left them with less flexibility and control over their solution.

The Solution

GuideIT recommended migrating the customer from their current server, Armor, into AWS EC2 and AWS S3. Through this solution, the customer would realize a reduction in cost, and greater durability and recoverability.

AWS Services

  • Managed Microsoft SQL Server (RDS)
  • AWS EC2 with Microsoft Windows Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves Medical files from HMC clients, pushes a copy to AWS S3, processes files and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique IDs using Amazon RDS (Microsoft SQL Server)
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring snapshots and rerunning the day’s files
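As a rough illustration, the daily snapshot step of this architecture could be scripted with boto3 along the following lines. This is a hedged sketch, not the customer's actual tooling: the volume and instance identifiers are hypothetical, and the AWS clients are passed in as parameters so the helpers stay self-contained.

```python
from datetime import date

def snapshot_name(prefix, day):
    """Build a deterministic, date-stamped snapshot identifier."""
    return f"{prefix}-{day.isoformat()}"

def take_daily_snapshots(ec2, rds, volume_id, db_instance_id, day):
    """Create the daily EC2 volume and RDS snapshots described above.

    `ec2` and `rds` would be boto3 clients, e.g. boto3.client("ec2")
    and boto3.client("rds"); EC2 snapshots are stored in S3 by AWS.
    """
    ec2.create_snapshot(
        VolumeId=volume_id,            # hypothetical volume, e.g. "vol-0abc..."
        Description=snapshot_name("daily-ec2", day),
    )
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_name("daily-rds", day),
        DBInstanceIdentifier=db_instance_id,  # hypothetical RDS instance name
    )

print(snapshot_name("daily-ec2", date(2020, 1, 31)))  # daily-ec2-2020-01-31
```

Recovery would then be the mirror image: restore the most recent EC2 and RDS snapshots and re-run that day's files through the TIBCO process.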

 

Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve into the future at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all of the ways that our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case Studies to Learn About Our Services

We have implemented several case studies that are aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped countless companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how they relate to the healthcare industry. Listen in to learn about COVID-19’s impact on the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health has grown to nearly 1,000 primary care providers, with over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization was seeking a partnership with a Managed Services provider to aid in implementing and supporting a 24x7 scalable model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To make these initiatives successful, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Sharpen focus on end users and responsiveness through Service Level metrics and continuous improvement to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all.”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieve the objectives of the business expansion would be to engage GuideIT to tap into their Managed Services solutions that would assume IT leadership and provide subject matter experts. GuideIT would deliver a solution that encompasses infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would provide Catalyst Health with the environment to deploy a new Electronic Medical Record platform which will enable greater access to clinical data for caregivers and offer improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Business stakeholders prioritize IT initiatives for greater focus on success that would drive greater business results

Why GuideIT

With GuideIT’s healthcare expertise combined with its technology capabilities for managing a customer’s support requirements, a set of best practices and processes would be deployed to provide an improved result for Catalyst Health’s technology environment. GuideIT would operationalize a set of technology metrics to allow for greater transparency of performance, resiliency, and predictable results for the organization.

The best practice approach would create the foundation of operational excellence for Catalyst Health’s IT environment, enabling greater business results along with on-time, on-budget delivery. The underlying cost structure converted from fixed to variable to support scalability, allowing Catalyst Health to realize a lower expense ratio as quality improved. Access to critical skill sets that would otherwise be difficult to hire and retain provided additional value to the organization.

The Implementation

GuideIT began with a consultative approach that included fully understanding the unique business model and support needs of Catalyst Health and its customers. Services were built around nine distinct areas: Infrastructure Management and Optimization, Service Desk, End User Field Support, Clinical Applications Support, Project Management, Vendor Management, Invoice Management, Security Enhancement, and Clinic Support.

1. Service Desk Management - Stakeholders identified the need to implement a more robust service desk that would aid in first call resolution for internal and external customers.
2. Infrastructure Management Transition - As the business grew, the need to support a larger, more diverse and scalable technology portfolio emerged. GuideIT assessed the environment and identified areas for immediate remediation; infrastructure standards, procedures, and performance management solutions were implemented to optimize the existing technology. As a part of this transition, GuideIT transitioned existing customer IT staff and filled identified gaps in skill sets with additional resources.
3. Expansion of Infrastructure Support - With continued growth and dependency on technology, Catalyst Health expanded the relationship to include 24x7 Service Desk, Clinical Applications Service Desk, and project management. This expanded scope allowed for greater end-to-end problem resolution.
4. ENHANCEMENTS TO SUPPORT TODAY'S ENVIRONMENT - The events of the 2020 pandemic brought about new challenges and new solutions. In partnership with Catalyst Health, GuideIT responded with solutions for remote work, remote support, a COVID-19 hotline and, most recently, a Pharmacy Call Center.

The Results

  • Improved operational performance of IT systems with improved system availability
  • Seamless integration with the business departments to function as one-team
  • Improved IT solutions and responsiveness to the business
  • Improved efficiency cost ratios for the organization during a high growth period
  • Ability to support increased IT demand with a variable cost structure

Regional Health System to Accelerate Information Flow and Automate Back Office Processes through GuideIT

April 25, 2019 – Plano, TX – GuideIT today announced it signed a new contract to provide business intelligence solutions for a regional health system.

With the objectives of accelerating information flow and optimizing back-office processes, the health system launched an initiative to replace manual reporting that requires information from multiple sources, including its EMR.  GuideIT will integrate critical data sources into a common platform, apply business logic and develop the visualizations necessary to meet the health system’s management objectives.

“In healthcare, there is an opportunity to strengthen patient care and operating performance through greater and more timely access to information,” said Chuck Lyles, CEO for GuideIT. “Healthcare providers have more information about their patients and businesses than ever before.  At GuideIT, our healthcare and data specialists help healthcare providers leverage this information to produce tangible business accomplishments.”

GuideIT Digital Business solutions, which incorporate Digital Transformation, Business Intelligence and Digital Workplace, help organizations operate more efficiently, convert ideas for creating new business value into reality, and facilitate a dynamic, anytime-anyplace business environment.

About GuideIT

GuideIT provides IT services that make technology contribute to business success. Through its consulting, managed IT, digital business, and cyber security solutions and the way it partners with customers, simplifies the complex, and inspires confidence, GuideIT utilizes technology in an industry context to enable the creation of business value and create an IT experience that delivers. Founded in 2013 and part of a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. More information is available at www.guideit.com.

Risk and Security Management Solutions Provider Modernizes Go-To-Market Application

A leading provider of risk and security management solutions needed to rewrite and modernize its core go-to-market application. GuideIT collaborated with the organization to define its business requirements, developed the new application using a hybrid agile/waterfall development method, and continues to enhance the product through agile sprint and release cycles. The application, with its modern interface and improved features and functionality, helped the customer expand their subscriber base by more than 95% in a 20-month period.

How to Protect Your Business From the Growing Complexity of Email-Based Security Attacks

The Threat Landscape

Organizations face a growing frequency and complexity of email-based security threats, as most targeted attacks begin with an email. Advanced malware delivery, phishing, and domain and identity spoofing can penetrate the primary layer of security provided as part of the email service and damage your business. With the increasing complexity of attacks, relying solely upon base security features and employee training is no longer adequate. Additionally, the types of organizations receiving these email attacks are expanding to include not only large and well-known businesses, but also small businesses, because of a perception that they will have fewer security layers.

Our Approach

With GuideIT Advanced Email Protection you receive the extra security necessary to address this growing threat. We provide a service configurable to the level of protection you seek that is priced on a variable, per mailbox basis. Based on the requirements established, which encompass the level of protection, filter rules and user parameters, we implement and operate the advanced protection, while also providing you visibility into the threat environment and actions to protect your business.

How It Works

We implement a protective shield, monitored by security experts, through which all email traffic is routed. Inbound messages are checked against known fraudulent and dangerous URLs and email addresses, while attachments are scanned for malware. When an incoming email is flagged, it is blocked, quarantined, and the GuideIT security team is notified. We then work with your team to revise the protective rules as necessary for your business. All outbound messages are scanned to ensure that Personally Identifiable Information (PII) and Protected Health Information (PHI) do not leave the organization accidentally or maliciously.
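Conceptually, these inbound checks amount to a simple decision pipeline. The following is an illustrative Python sketch only; the blocklists, rules, and threshold behavior here are hypothetical placeholders, not the actual GuideIT rule set:

```python
import re

# Hypothetical indicator lists; in practice these would be threat feeds
# maintained and tuned by the security team.
BLOCKED_URLS = {"malicious.example.com"}
BLOCKED_SENDERS = {"attacker@bad.example"}
MALWARE_EXTENSIONS = (".exe", ".js", ".scr")

def classify_inbound(sender, body, attachments):
    """Return 'quarantine' when a known-bad indicator is found, else 'deliver'."""
    # Check the sender against known fraudulent addresses.
    if sender in BLOCKED_SENDERS:
        return "quarantine"
    # Check every URL host in the body against known dangerous domains.
    for host in re.findall(r"https?://([^/\s]+)", body):
        if host in BLOCKED_URLS:
            return "quarantine"
    # Flag attachments with executable-style extensions for malware review.
    if any(name.lower().endswith(MALWARE_EXTENSIONS) for name in attachments):
        return "quarantine"
    return "deliver"

print(classify_inbound("alice@ok.example",
                       "see https://malicious.example.com/x", []))  # quarantine
```

A real deployment layers on attachment sandboxing, outbound PII/PHI scanning, and analyst notification on every quarantine event, as described above.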

Read next: How to Protect Your End User Devices from COVID-19 Phishing Attacks

How You Will Benefit

Through our Advanced Email Protection solution, you will realize:

  • Greater protection from advanced email threats
  • Increased visibility into the threats being experienced
  • Enhanced email encryption and data loss prevention
  • Extended protection to social media accounts
  • Better compliance and discovery readiness

Contact us to get started today.

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions to implement a defense-in-depth security strategy and position the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of operations, touching everything from facilities operations to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment paired with a lack of internal resources led to a struggle on the part of the organization to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments at the email threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, leading to increased risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address the shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust strategy for patch management would ensure the environment was properly safeguarded against existing vulnerabilities with the latest updates available. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions to secure the infrastructure and change end-user behavior.

The Implementation

1. ASSESSMENT - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. PLANNING - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, end-point protection, infrastructure security, and email security.
3. DEPLOYMENT - Informed by the assessment, agents were rolled out within a week to stand up the centrally managed end-point protection solution. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.
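The 30-day rolling patch-compliance ratio reported here is a straightforward metric to compute. A minimal illustrative sketch (the fleet data below is hypothetical, not the customer's environment):

```python
from datetime import date, timedelta

def rolling_patch_compliance(last_patched, today, window_days=30):
    """Fraction of devices whose most recent patch falls within the window.

    `last_patched` is a list with one `date` per device: the date that
    device last received patches.
    """
    if not last_patched:
        return 0.0
    cutoff = today - timedelta(days=window_days)
    current = sum(1 for d in last_patched if d >= cutoff)
    return current / len(last_patched)

# Example: 19 of 20 devices patched in the last 30 days -> 95% compliant
today = date(2020, 10, 1)
fleet = [today - timedelta(days=5)] * 19 + [today - timedelta(days=200)]
print(rolling_patch_compliance(fleet, today))  # 0.95
```

Tracking this ratio over time is what distinguishes a one-off remediation from the sustained 95%+ posture the new tooling maintains.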

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been blocked or resolved, ensuring the endpoints and users are secure. The email security solution initially scanned over 83,000 emails, effectively protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages being successfully delivered during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has once again been named one of the fastest growing entrepreneurial companies for a third year in the SMU Cox Dallas 100™ awards.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of area Dallas-area entrepreneurs.  The award focuses not only on growth, but an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100.” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading edge solutions to market. We place a high value on the entrepreneurial spirit which has contributed to the success and growth which we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence while delivering technology with an industry specific context to enable the creation of business value and create an IT experience that delivers. 

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth.  Learn more at www.guideit.com

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their current server which caused them to have less flexibility and control over their solution.

The Solution

GuideIT recommended moving the customer from their current server, Armor, and moving it into AWS EC2 and AWS SE. Through this solution, the customer will realize a reduction in cost, and greater durability and recoverability.

AWS Services

  • Managed Microsoft Sequel Server (RDS)
  • AWS EC2 with Microsoft Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves Medical files from HMC clients, pushes a copy to AWS S3, processes files and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique ids using Amazon RDS “Microsoft SQL Server”
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring snapshots and rerunning files for day

 

Introducing a New Website and Online Experience from GuideIT
Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve into the future at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all of the ways that our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing
Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New
Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case
Studies to Learn About Our Services

We have implemented several case studies that are aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped a countless number of companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how it relates to the healthcare industry. Listen in to learn about COVID-19’s impact to the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health grew to nearly 1,000 primary care providers, over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization was seeking a partnership with a Managed Services provider to aid in implementing and supporting a 24x7 scalable model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To succeed in these initiatives, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Increase focus on end users and responsiveness, with Service Level metrics and continuous improvement, to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all.”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieving its business-expansion objectives would be to engage GuideIT’s Managed Services solutions, with GuideIT assuming IT leadership and providing subject matter experts. GuideIT would deliver a solution that encompasses infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would provide Catalyst Health with the environment to deploy a new Electronic Medical Record platform that would enable greater access to clinical data for caregivers and offer improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Prioritization of IT initiatives by business stakeholders, for greater focus on success and greater business results

Why GuideIT

By combining its healthcare expertise with the technology capabilities to manage a customer’s support requirements, GuideIT would deploy a set of best practices and processes to improve Catalyst Health’s technology environment. GuideIT would operationalize a set of technology metrics to allow for greater transparency of performance, resiliency, and predictable results for the organization.

The best practice approach would create the foundation of operational excellence for Catalyst Health’s IT environment, achieving greater business results along with on-time, on-budget delivery. The underlying cost structure was converted from fixed to variable to support scalability, allowing Catalyst Health to realize a lower expense ratio as quality improved. Having access to critical skill sets that would otherwise be difficult to hire and retain would be of additional value to the organization.

The Implementation

GuideIT began with a consultative approach that included fully understanding the unique business model and support needs of Catalyst Health and its customers. Services were built around nine distinct areas: Infrastructure Management and Optimization, Service Desk, End User Field Support, Clinical Applications Support, Project Management, Vendor Management, Invoice Management, Security Enhancement, and Clinic Support.

1. Service Desk Management - Stakeholders identified the need to implement a more robust service desk that would aid in first call resolution for internal and external customers.
2. Infrastructure Management Transition - As the business grew, the need to support a larger, more diverse and scalable technology portfolio emerged. GuideIT assessed the environment and identified areas for immediate remediation; infrastructure standards, procedures, and performance management solutions were implemented to optimize the existing technology. As a part of this transition, GuideIT transitioned existing customer IT staff and filled identified gaps in skill sets with additional resources.
3. Expansion of Infrastructure Support - With continued growth and dependency on technology, Catalyst Health expanded the relationship to include 24x7 Service Desk, Clinical Applications Service Desk, and project management. This expanded scope allowed for greater end-to-end problem resolution.
4. Enhancements to Support Today’s Environment - The events of the pandemic in 2020 brought about new challenges and new solutions. In partnership with Catalyst Health, GuideIT responded with solutions for remote work, remote support, a COVID-19 hotline and, most recently, a pharmacy call center.

The Results

  • Improved operational performance of IT systems with improved system availability
  • Seamless integration with the business departments to function as one team
  • Improved IT solutions and responsiveness to the business
  • Improved efficiency cost ratios for the organization during a high growth period
  • Ability to support increased IT demand with a variable cost structure

Regional Health System to Accelerate Information Flow and Automate Back Office Processes through GuideIT

April 25, 2019 – Plano, TX – GuideIT today announced it signed a new contract to provide business intelligence solutions for a regional health system.

With the objectives of accelerating information flow and optimizing back-office processes, the health system launched an initiative to replace manual reporting that requires information from multiple sources, including its EMR.  GuideIT will integrate critical data sources into a common platform, apply business logic and develop the visualizations necessary to meet the health system’s management objectives.

“In healthcare, there is an opportunity to strengthen patient care and operating performance through greater and more timely access to information,” said Chuck Lyles, CEO for GuideIT. “Healthcare providers have more information about their patients and businesses than ever before.  At GuideIT, our healthcare and data specialists help healthcare providers leverage this information to produce tangible business accomplishments.”

GuideIT Digital Business solutions, which incorporate Digital Transformation, Business Intelligence and Digital Workplace, help organizations operate more efficiently, turn ideas for creating new business value into reality, and facilitate a dynamic, anytime-anyplace business environment.

About GuideIT

GuideIT provides IT services that make technology contribute to business success. Through its consulting, managed IT, digital business, and cyber security solutions and the way it partners with customers, simplifies the complex, and inspires confidence, GuideIT utilizes technology in an industry context to enable the creation of business value and create an IT experience that delivers. Founded in 2013 and part of a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. More information is available at www.guideit.com.

Risk and Security Management Solutions Provider Modernizes Go-To-Market Application

A leading provider of risk and security management solutions needed to rewrite and modernize its core go-to-market application. GuideIT collaborated with the organization on defining its business requirements, developed the new application utilizing a hybrid agile/waterfall development method, and continues to enhance the product leveraging agile sprint and release cycles. The application, with its modern interface and improved features and functionality, helped the customer expand their subscriber base by more than 95% in a 20-month period.

How to Protect Your Business From the Growing Complexity of Email-Based Security Attacks

The Threat Landscape

Organizations face a growing frequency and complexity of email-based security threats, as most targeted attacks begin with an email. Advanced malware delivery, phishing, and domain and identity spoofing can penetrate the primary layer of security provided as part of the email service and damage your business. With the increasing complexity of attacks, relying solely upon base security features and employee training is no longer adequate. Additionally, the types of organizations receiving these email attacks are expanding to include not only large and well-known businesses, but also small businesses, because of a perception that they will have fewer security layers.

Our Approach

With GuideIT Advanced Email Protection you receive the extra security necessary to address this growing threat. We provide a service configurable to the level of protection you seek that is priced on a variable, per mailbox basis. Based on the requirements established, which encompass the level of protection, filter rules and user parameters, we implement and operate the advanced protection, while also providing you visibility into the threat environment and actions to protect your business.

How It Works

We implement a protective shield, monitored by security experts, through which all email traffic is routed. Inbound messages are checked against known fraudulent and dangerous URLs and email addresses, while attachments are scanned for malware. When an incoming email is flagged, it is blocked, quarantined, and the GuideIT security team is notified. We then work with your team to revise the protective rules as necessary for your business. All outbound messages are scanned to ensure that Personally Identifiable Information (PII) and Protected Health Information (PHI) do not leave the organization accidentally or maliciously.
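
As a rough illustration of the inbound screening described above – not GuideIT’s actual implementation – the following Python sketch checks a message against hypothetical blocklists of senders, URL patterns, and attachment types, and returns a quarantine decision with the reasons that triggered it:

```python
import re

# Hypothetical blocklists for illustration only -- a production service
# would sync these from continuously updated threat-intelligence feeds.
BLOCKED_SENDERS = {"billing@paypa1-secure.example"}
BLOCKED_URL_PATTERN = re.compile(r"https?://\S*(?:paypa1|account-verify)\S*", re.I)
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr"}

def screen_inbound(sender: str, body: str, attachments: list) -> dict:
    """Return a quarantine decision plus the reasons that triggered it."""
    reasons = []
    if sender.lower() in BLOCKED_SENDERS:
        reasons.append("known fraudulent sender")
    if BLOCKED_URL_PATTERN.search(body):
        reasons.append("dangerous URL")
    for name in attachments:
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            reasons.append("risky attachment: " + name)
    # Any hit blocks and quarantines the message for analyst review.
    return {"quarantine": bool(reasons), "reasons": reasons}
```

A real deployment layers this kind of rule matching with malware scanning and analyst-tuned filter rules; the sketch only shows the decision shape.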

Read next: How to Protect Your End User Devices from COVID-19 Phishing Attacks

How You Will Benefit

Through our Advanced Email Protection solution, you will realize:

  • Greater protection from advanced email threats
  • Increased visibility into the threats being experienced
  • Enhanced email encryption and data loss prevention
  • Extended protection to social media accounts
  • Better compliance and discovery readiness

Contact us to get started today.

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides a strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions to implement a defense-in-depth security strategy and position the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of the operations touching everything from facilities operations to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment, paired with a lack of internal resources, left the organization struggling to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments at the email threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, leading to increased risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops, and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust strategy for patch management would ensure the environment was properly safeguarded against existing vulnerabilities with the latest updates available. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions to secure the infrastructure and change end-user behavior.

The Implementation

1. Assessment - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High-risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. Planning - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, endpoint protection, infrastructure security, and email security.
3. Deployment - Agents were deployed within a week to stand up the centrally managed endpoint protection solution. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.
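
The 30-day rolling update ratio described above is simply the fraction of devices whose most recent patch date falls inside the rolling window. A minimal sketch – the function name and data shape are illustrative assumptions, not part of any specific tooling:

```python
from datetime import date, timedelta

def rolling_patch_compliance(last_patched, today, window_days=30):
    """Fraction of devices whose last patch date falls within the window.

    last_patched: dict mapping device name -> date of most recent patch.
    """
    cutoff = today - timedelta(days=window_days)
    current = sum(1 for d in last_patched.values() if d >= cutoff)
    return current / len(last_patched)

# Example: a 20-device fleet with one stale machine yields a 95% ratio.
fleet = {"host%d" % i: date(2021, 1, 20) for i in range(19)}
fleet["host19"] = date(2020, 6, 1)  # unpatched for months
ratio = rolling_patch_compliance(fleet, date(2021, 2, 1))  # 19/20 = 0.95
```

Tracking this ratio over time is what lets a patch program demonstrate movement from the initial sub-35% figure to a sustained 95%+.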

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been either blocked or resolved, ensuring the endpoints and users are secure. The email security solution initially scanned over 83,000 emails, effectively protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages being successfully delivered during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has once again been named one of the fastest growing entrepreneurial companies, for the third year, in the SMU Cox Dallas 100™ awards.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of Dallas-area entrepreneurs. The award focuses not only on growth, but also on an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100,” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading edge solutions to market. We place a high value on the entrepreneurial spirit which has contributed to the success and growth which we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence while delivering technology with an industry specific context to enable the creation of business value and create an IT experience that delivers. 

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. Learn more at www.guideit.com.

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their current server environment, which gave them less flexibility and control over their solution.

The Solution

GuideIT recommended migrating the customer’s environment from their existing provider, Armor, into AWS EC2 and AWS S3. Through this solution, the customer will realize a reduction in cost, and greater durability and recoverability.

AWS Services

  • Managed Microsoft SQL Server (RDS)
  • AWS EC2 with Microsoft Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves medical files from HMC clients, pushes a copy to AWS S3, processes the files and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique IDs using Amazon RDS (Microsoft SQL Server)
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring snapshots and rerunning the day’s files
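
The customer-matching step above – matching records and assigning unique IDs – can be illustrated with a deterministic hash over normalized identifying fields. This stand-alone Python sketch is an assumption about one common approach, not the actual RDS-backed implementation; the field names are hypothetical:

```python
import hashlib

def customer_key(first: str, last: str, dob: str) -> str:
    """Derive a stable, deterministic ID from identifying fields.

    Normalizing case and whitespace first means the same person
    matches to the same key regardless of how a file formats the name.
    """
    normalized = "|".join(s.strip().lower() for s in (first, last, dob))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```

Because the key is a pure function of the normalized fields, rerunning a day’s files after a snapshot restore reproduces the same IDs, which is what makes the recovery procedure described above safe to repeat.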

 

Introducing a New Website and Online Experience from GuideIT
Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve into the future at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all of the ways that our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing
Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New
Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case
Studies to Learn About Our Services

We have implemented several case studies that are aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped a countless number of companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how it relates to the healthcare industry. Listen in to learn about COVID-19’s impact to the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health has grown to nearly 1,000 primary care providers, with over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization was seeking a partnership with a Managed Services provider to aid in implementing and supporting a 24x7 scalable model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To achieve success of these initiatives, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Greater focus on end users and responsiveness with Service Level metrics and continuous improvement to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieve the objectives of the business expansion would be to engage GuideIT to tap into their Managed Services solutions that would assume IT leadership and provide subject matter experts. GuideIT would deliver a solution that encompasses infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would provide Catalyst Health with the environment to deploy a new Electronic Medical Record platform which will enable greater access to clinical data for caregivers and offer improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Business stakeholders prioritize IT initiatives for greater focus on success that would drive greater business results

Why GuideIT

Regional Health System to Accelerate Information Flow and Automate Back Office Processes through GuideIT

April 25, 2019 – Plano, TX – GuideIT today announced it signed a new contract to provide business intelligence solutions for a regional health system.

With the objectives of accelerating information flow and optimizing back-office processes, the health system launched an initiative to replace manual reporting that requires information from multiple sources, including its EMR.  GuideIT will integrate critical data sources into a common platform, apply business logic and develop the visualizations necessary to meet the health system’s management objectives.

“In healthcare, there is an opportunity to strengthen patient care and operating performance through greater and more timely access to information,” said Chuck Lyles, CEO for GuideIT. “Healthcare providers have more information about their patients and businesses than ever before.  At GuideIT, our healthcare and data specialists help healthcare providers leverage this information to produce tangible business accomplishments.”

GuideIT Digital Business solutions, which incorporate Digital Transformation, Business Intelligence and Digital Workplace, help organizations operate more efficiently, turn ideas for creating new business value into reality, and facilitate a dynamic, anytime-anyplace business environment.

About GuideIT

GuideIT provides IT services that make technology contribute to business success. Through its consulting, managed IT, digital business, and cyber security solutions and the way it partners with customers, simplifies the complex, and inspires confidence, GuideIT utilizes technology in an industry context to enable the creation of business value and create an IT experience that delivers. Founded in 2013 and part of a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. More information is available at www.guideit.com.

Risk and Security Management Solutions Provider Modernizes Go-To-Market Application

A leading provider of risk and security management solutions needed to rewrite and modernize its core go-to-market application. GuideIT collaborated with the organization to define its business requirements, developed the new application using a hybrid agile/waterfall development method, and continues to enhance the product through agile sprint and release cycles. The application, with its modern interface and improved features and functionality, helped the customer expand its subscriber base by more than 95% in a 20-month period.

How to Protect Your Business From the Growing Complexity of Email-Based Security Attacks

The Threat Landscape

Organizations face email-based security threats of growing frequency and complexity, as the majority of targeted attacks begin with an email. Advanced malware delivery, phishing, and domain and identity spoofing can penetrate the primary layer of security provided as part of the email service and damage your business. With the increasing complexity of attacks, relying solely on base security features and employee training is no longer adequate. Additionally, the types of organizations receiving these attacks are expanding to include not only large, well-known businesses but also small businesses, which are perceived to have fewer security layers.

Our Approach

With GuideIT Advanced Email Protection you receive the extra security necessary to address this growing threat. We provide a service configurable to the level of protection you seek that is priced on a variable, per mailbox basis. Based on the requirements established, which encompass the level of protection, filter rules and user parameters, we implement and operate the advanced protection, while also providing you visibility into the threat environment and actions to protect your business.

How It Works

We implement a protective shield, monitored by security experts, through which all email traffic is routed. Inbound messages are checked against known fraudulent and dangerous URLs and email addresses, while attachments are scanned for malware. When an incoming email is flagged, it is blocked and quarantined, and the GuideIT security team is notified. We then work with your team to revise the protective rules as necessary for your business. All outbound messages are scanned to ensure that Personally Identifiable Information (PII) and Protected Health Information (PHI) do not leave the organization accidentally or maliciously.
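Purely as an illustration, the inbound and outbound checks described here can be sketched as a minimal filter. The blocklist entries, PII pattern, and function names below are hypothetical assumptions for the sketch, not GuideIT's actual implementation:

```python
import re

# Hypothetical blocklist of known-bad senders and URLs (illustrative only).
BLOCKED_SENDERS = {"phish@evil.example"}
BLOCKED_URLS = {"http://malware.example/payload"}

# A simple PII pattern: US Social Security numbers. Real filters use many more.
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]

URL_RE = re.compile(r"https?://\S+")


def screen_inbound(sender: str, body: str) -> str:
    """Quarantine a message that matches a known-bad sender or URL."""
    if sender in BLOCKED_SENDERS:
        return "quarantine"
    if any(url.rstrip(".,") in BLOCKED_URLS for url in URL_RE.findall(body)):
        return "quarantine"
    return "deliver"


def screen_outbound(body: str) -> str:
    """Block outbound mail that appears to contain PII (e.g. an SSN)."""
    if any(p.search(body) for p in PII_PATTERNS):
        return "block"
    return "deliver"
```

In a production service these decisions would also trigger quarantine storage and a notification to the security team, as described above.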

Read next: How to Protect Your End User Devices from COVID-19 Phishing Attacks

How You Will Benefit

Through our Advanced Email Protection solution, you will realize:

  • Greater protection from advanced email threats
  • Increased visibility into the threats being experienced
  • Enhanced email encryption and data loss prevention
  • Extended protection to social media accounts
  • Better compliance and discovery readiness

Contact us to get started today.

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides a strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions, implementing a defense-in-depth security strategy and positioning the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of operations, touching everything from facilities to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment, paired with a lack of internal resources, left the organization struggling to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments arriving at the email threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, increasing the risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address the shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust patch management strategy would ensure the environment was properly safeguarded against existing vulnerabilities with the latest updates available. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions that secure the infrastructure and change end-user behavior.

The Implementation

1. ASSESSMENT - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. PLANNING - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, end-point protection, infrastructure security, and email security.
3. DEPLOYMENT - Agents were deployed within a week to stand up the centrally managed end-point protection solution. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been blocked or resolved, ensuring endpoints and users are secure. The email security solution initially scanned over 83,000 emails, protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages being successfully delivered during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has once again been named one of the fastest-growing entrepreneurial companies in the SMU Cox Dallas 100™ awards, its third year to be recognized.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of Dallas-area entrepreneurs. The award focuses not only on growth, but also on an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100,” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading-edge solutions to market. We place a high value on the entrepreneurial spirit, which has contributed to the success and growth we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence while delivering technology with an industry specific context to enable the creation of business value and create an IT experience that delivers. 

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. Learn more at www.guideit.com.

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their existing hosting provider, which limited their flexibility and control over their solution.

The Solution

GuideIT recommended migrating the customer from their existing provider, Armor, to AWS EC2 and AWS S3. Through this solution, the customer would realize a reduction in cost along with greater durability and recoverability.

AWS Services

  • Managed Microsoft SQL Server (RDS)
  • AWS EC2 with Microsoft Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves medical files from HMC clients, pushes a copy to AWS S3, processes the files, and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique IDs using Amazon RDS for Microsoft SQL Server
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring the snapshots and rerunning the files for the day
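The daily snapshot-and-rerun recovery flow in the bullets above can be sketched in outline. The resource names, snapshot naming scheme, and file names below are illustrative assumptions, not the actual configuration:

```python
from datetime import date


def snapshot_id(resource: str, day: date) -> str:
    """Daily snapshot identifier for an EC2 or RDS resource (naming is illustrative)."""
    return f"{resource}-{day.isoformat()}"


def recovery_plan(day: date, files_for_day: list) -> dict:
    """Recovery = restore that day's snapshots, then rerun the day's input files."""
    return {
        "restore": [
            snapshot_id("ec2-tibco-bw", day),      # EC2 instance running TIBCO BusinessWorks
            snapshot_id("rds-sqlserver", day),     # Amazon RDS for Microsoft SQL Server
        ],
        "rerun": list(files_for_day),              # reprocess the day's medical files
    }


# Example: recovering May 1 means restoring both snapshots and rerunning that day's file.
plan = recovery_plan(date(2020, 5, 1), ["clientA_20200501.x12"])
```

In practice the restore and rerun steps would be driven through the AWS APIs and the TIBCO job scheduler; this sketch only captures the ordering of the plan.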

 

Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all the ways our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case Studies to Learn About Our Services

We have implemented several case studies aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped countless companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how they relate to the healthcare industry. Listen in to learn about COVID-19’s impact on the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health has grown to nearly 1,000 primary care providers, with over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization sought a partnership with a Managed Services provider to aid in implementing and supporting a 24x7 scalable model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To achieve these initiatives, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Increase focus on end users and responsiveness, with Service Level metrics and continuous improvement to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all.”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieving the objectives of the business expansion would be to engage GuideIT’s Managed Services solutions, which would assume IT leadership and provide subject matter experts. GuideIT would deliver a solution encompassing infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would give Catalyst Health the environment to deploy a new Electronic Medical Record platform, enabling greater access to clinical data for caregivers and improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Prioritization of IT initiatives by business stakeholders to drive greater business results

Why GuideIT

By combining healthcare expertise with the technology capabilities to manage a customer’s support requirements, GuideIT would deploy a set of best practices and processes to improve Catalyst Health’s technology environment. GuideIT would also operationalize a set of technology metrics to allow for greater transparency of performance, resiliency, and predictable results for the organization.

The best practice approach would create the foundation of operational excellence for Catalyst Health’s IT environment, achieving greater business results along with on-time, on-budget delivery. Converting the underlying cost structure from fixed to variable would support scalability and allow Catalyst Health to realize a lower expense ratio as quality improved. Access to critical skill sets that would otherwise be difficult to hire and retain would provide additional value to the organization.

The Implementation

GuideIT began with a consultative approach that included fully understanding the unique business model and support needs of Catalyst Health and its customers. Services were built around nine distinct areas: Infrastructure Management and Optimization, Service Desk, End User Field Support, Clinical Applications Support, Project Management, Vendor Management, Invoice Management, Security Enhancement, and Clinic Support.

1. Service Desk Management - Stakeholders identified the need to implement a more robust service desk that would aid in first call resolution for internal and external customers.
2. Infrastructure Management Transition - As the business grew, the need to support a larger, more diverse and scalable technology portfolio emerged. GuideIT assessed the environment and identified areas for immediate remediation; infrastructure standards, procedures and performance management solutions were implemented to optimize the existing technology. As part of this transition, GuideIT transitioned existing customer IT staff and filled identified gaps in skill sets with additional resources.
3. Expansion of Infrastructure Support - With continued growth and dependency on technology, Catalyst Health expanded the relationship to include 24x7 Service Desk, Clinical Applications Service Desk, and project management. This expanded scope allowed for greater end-to-end problem resolution.
4. Enhancements to Support Today's Environment - The pandemic of 2020 brought about new challenges and new solutions. In partnership with Catalyst Health, GuideIT responded with solutions for remote work, remote support, a COVID-19 hotline and, most recently, a pharmacy call center.

The Results

  • Improved operational performance of IT systems with improved system availability
  • Seamless integration with the business departments to function as one-team
  • Improved IT solutions and responsiveness to the business
  • Improved efficiency cost ratios for the organization during a high growth period
  • Ability to support increased IT demand with a variable cost structure

Your Data: No Matter What You Do, It’s Your Most Valuable Asset…DATA MINING (1 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last weekend I read a very interesting book entitled “The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It” by Scott Patterson. I highly recommend it as a must-read for anyone doing Business Intelligence, and especially Data Mining.

So what is Data Mining? Basically, it is the practice of examining large databases in order to generate new information. OK, let’s dig into that to understand some business value.

Let us consider the US Census. By law, it is conducted every ten years and produces petabytes of data (1 petabyte is one quadrillion bytes), crammed full of facts that are important to almost anyone doing data mining for almost any consumer-based product or service. Quick sidebar and promo: in part 2 of this micro series, I will share where databases like the census and others can be accessed to help make your data mining exercise valuable.

So if I were asked by the marketing department to help predict how much to spend on a new advertising campaign for a health care product that enhances the existing dental benefits of those already in qualified dental plans, I would have a need for data mining. With these criteria, I would, for example, query the average commute time of people over 16 in the state of Texas: it is 25 minutes. We would now have a cornerstone insight to work from. This, of course, narrows the age group to those earning incomes rather than those on Social Security and Medicare. To validate a possible conclusion, we run a secondary query on additional demographic criteria and learn that the 25-minute commute volume count doesn’t change, yet 35% of the people belong to one particular minority segment.
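As a minimal sketch of such a query, filtering a census-style table to working-age respondents and computing an average commute time might look like the following. The records below are invented for illustration; a real census extract is vastly larger.

```python
# Hypothetical mini-extract of census-style records (invented values).
records = [
    {"age": 34, "commute_minutes": 25, "minority": True},
    {"age": 52, "commute_minutes": 30, "minority": False},
    {"age": 14, "commute_minutes": 10, "minority": False},  # under 16: excluded
    {"age": 45, "commute_minutes": 20, "minority": True},
]

# Restrict to respondents over 16, mirroring the query described above.
workers = [r for r in records if r["age"] > 16]

avg_commute = sum(r["commute_minutes"] for r in workers) / len(workers)
minority_share = sum(r["minority"] for r in workers) / len(workers)

print(f"average commute: {avg_commute:.0f} min")  # average commute: 25 min
print(f"minority share: {minority_share:.0%}")    # minority share: 67%
```

The same pattern scales up: swap the inline list for a real census extract and the filter and aggregation logic stay the same.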

I pass this information to the Marketing Department, and they now have a basis for understanding how much to pay for a statewide marketing campaign to promote the new product, when to run the campaign, and what channels and platforms to use.

DATA MINING…can’t live without it. Next week we’ll cover how and where to mine.

Servant Leadership…How to Build Trust in The Midst of Turmoil

AUTHORED BY RON HILL, VICE PRESIDENT, SALES @ GUIDEIT

It was a sunny winter day, and I had just started as the Client Executive at one of the largest accounts in the company. Little did I know, clouds were about to roll in. The CIO walked into my office and sat down with a big sigh. She communicated that they were ending our agreement and moving to a different service provider. We had 12 months. The news required immediate action by our company, carried implications in the market, and created an environment of uncertainty for our team of more than 700 people providing service support.

This was no time to defend or accept defeat. We had to act. Our account leadership team readied the organization for the work ahead and the imminent loss. We formally announced the situation to the organization. There were tears, and some were even distraught. Our leadership team had not faced this situation before. The next 12 months looked daunting.

Regardless, it was time to lead. We created a “save” strategy and stepped into action, beginning with daily team meetings. We invested time prioritizing and sharing action items and implications across information systems, project management, and business process services. It was our job to operate with excellence, despite the past. It was our job to honorably communicate knowledge to the incoming service provider. One outcome of our work was a weekly email outlining the past week’s accomplishments and expectations for the week ahead. The email often included a blend of personal stories and team successes. We even came up with a catchy brand for the email…Truth of the Matter. It turned out to be a key vehicle that kept our teams bonded and informed, and our leadership team used it to help maintain trust with the team.

During our work, we also began to rebuild trust with the customer as we continued to support them in all phases of their operation. Because of our leadership team’s commitment to service, transparency, and integrity, the delivery team was inspired to achieve many great milestones during those 12 months. We were instrumental in helping our customer win multiple business awards, including a US News and World Report top ranking. We also found ways to achieve goals that set new trends in their industry. Before we knew it, the year had come and gone and we were still there.

Reflecting back, that dark day when the CIO informed me that we were done was actually the beginning of more than a decade-long relationship. The team had accomplished an improbable feat. In the end, it was the focus of our leadership to come together with a single message and act with transparency…letting our guard down to build an environment of trust with the team and with the customer. This enabled all of us to focus on meeting the goals of the customer, together.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 2 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last week we declared, “If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years. That’s right…five years”.

This week, I’m introducing a perspective on leveraging Big Data to create tangible asset value. In the world of Big Data, structure is undefined and management tools vary greatly across both open source and proprietary offerings…each requiring a set of skills distinct from the world of relational or hierarchical data. To appreciate the sheer mass of the word “big,” consider that some social media sites generate feeds of 45 terabytes a day. Some of the users of this data have nicknames like “Quants,” and they use tools called Hadoop, MapReduce, GridGain, HPCC and Storm. It’s a crazy scene out there!

Ok, so the world of big data is a crazy scene. How do we dig in and extract value from it? In working with a customer recently, we set an objective to leverage Big Data to help launch a new consumer product. In the old days, we would assemble a survey team, form a focus group, and make decisions based on a very small sample of opinions…hoping to launch the product with success. Today, we access, analyze, and filter multiple data sources on people, geography, and buying patterns to identify the highest-probability store locations for a successful launch. All these data sources exist in various electronic formats today and are available through delivery sources like Amazon Web Services (AWS) and others.

In our case, after processing one petabyte (1000 terabytes) of data we enabled the following business decisions…

  • Focused our target launch areas on five zip codes where families have children with an average age of two to four years, a good saturation of grocery stores, and an above-average median income
  • Initiated a marketing campaign, including social media centered on moms and TV media centered on cartoon shows
  • Offered product placement incentives for stores, focusing on the right shelf placement for moms and children.

While moms are the buyers, children are influencers when in the store. In this case, for this product, lower shelves showed a higher purchasing probability because of the visibility that lets children make the connection to the advertising and “help” mom make the decision to buy.
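The zip-code targeting behind that launch can be sketched as a simple filter over per-area aggregates. The zip codes, field names, and thresholds below are invented for illustration; the actual exercise ran over a petabyte of source data.

```python
# Hypothetical per-zip-code aggregates distilled from the larger dataset.
zips = [
    {"zip": "75001", "avg_child_age": 3.1, "grocery_stores": 12, "median_income": 82000},
    {"zip": "75024", "avg_child_age": 3.6, "grocery_stores": 9,  "median_income": 95000},
    {"zip": "75090", "avg_child_age": 7.8, "grocery_stores": 4,  "median_income": 51000},
]

STATE_MEDIAN_INCOME = 60000  # illustrative benchmark, not a real figure

targets = [z["zip"] for z in zips
           if 2 <= z["avg_child_age"] <= 4              # children aged two to four
           and z["grocery_stores"] >= 8                 # good grocery saturation
           and z["median_income"] > STATE_MEDIAN_INCOME]  # above-average income

print(targets)  # ['75001', '75024']
```

The hard part of Big Data is producing the per-area aggregates from raw feeds (the Hadoop/MapReduce work mentioned earlier); once distilled, the targeting decision itself is a small, auditable rule like this one.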

Conclusion? The dataset is now archived as a case study, and the team is repeating this exercise in other regional geographic areas. Sales can now be compared between areas, enabling more prudent and valuable business decisions. Leveraging Big Data delivered asset value by increasing profitability based not on the product itself but on the use of data about the product. What stories can you share about leveraging Big Data? Post them or ask questions in the comments section.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 1)

Authored by Donald C. Gillette, Ph.D., Data Consultant @ GuideIT

If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years.  That’s right…five years.

Not so sure that’s true? Ask entertainment giant Caesars Entertainment Corp. They recently filed Chapter 11 and have learned that their data is what creditors value (Wall Street Journal, March 19, 2015, “Prize in Caesars Fight: Data on Players”; the customer loyalty program is valued at $1 billion by creditors). The data intelligence on their customers is worth more than any of their other assets, including real estate.

Before working to prove this seemingly bold statement, let’s take a look back to capture some much needed perspective about data.

The Mainframe

Space and resources were expensive, and systems were designed and implemented by professionals who had a good knowledge of the enterprise and its needs. Additionally, very structured processes existed to develop systems and information. All this investment and structure was often considered a bottleneck and an impediment to progress. Critical information, such as a customer file or purchasing history, was stored in a single, protected location. Mainframe Business Intelligence offerings were report-writing tools like Mark IV. Programmers and some business users were able to pull basic reports. However, very little data delivered intelligence like customer buying habits.

Enter the Spreadsheet

With the introduction of the PC, Lotus 1-2-3 soon arrived in the market. We finally had a tool that could represent data in a two-dimensional (2D) format, enabling the connection of valuable data to static business information. Some actionable information was developed, resulting in better business decisions. This opened up a whole new world of what we now call business intelligence. Yet connecting the right data points was a cumbersome, manual process. Windows entered the scene and, with it, the market shifted from Lotus to Excel, carrying over similar functionality and challenges.

Client Server World Emerges

As client-server architectures emerged in the marketplace, data became much more accessible. It was also easier to connect together, relative to the past, providing stakeholders with real business intelligence and its value to the enterprise. With tools like Cognos, Teradata, and Netezza in play, data moved from 2D to 3D presentation. Microsoft also entered the marketplace with SQL Server. All this change actually flipped the challenges of the Mainframe era. Instead of bottlenecked data that is hard to retrieve, version creep entered the fold…multiple versions of similar information in multiple locations. What’s the source of truth?

Tune in next week as we provide support for data being your most valuable asset with a perspective and case study analysis of a Business Intelligence model that uses all technology platforms and delivers the results to your smartphone.

Reduce IT Spending… Approach Rationalization The Right Way

AUTHORED BY FRANK T. AVIGNONE, IV, TRANSFORMATION EXECUTIVE @ GUIDEIT

Meaningful Use, Health Information Exchange, and Predictive Analytics are a few phrases that keep hospital CFOs awake at night. As the hospital market prepares for another shift in reimbursement, including a 1.3% cut in Medicare reimbursement for 2015 and an additional 75% cut in DSH payments by 2019, the health system CFO has innumerable financial challenges in maintaining a healthy balance sheet. Add to these concerns the looming ICD-10 transition expense and consolidation (including the aggregation of physicians and post-acute care providers), and the future is daunting for the chief financial officer and other executive stakeholders.

There is a bright spot for the health system CFO with respect to bringing sanity to the healthcare IT spend on the balance sheet. It just requires a little courage. The majority of US health systems maintain an IT portfolio that supports redundant functions across the enterprise. In a consolidation environment where M&A activity is increasing, integrating disparate clinical and business systems can cost $70,000–100,000 per bed. A simple technology portfolio rationalization effort can reduce IT spend in any environment by as much as 60% in capex and 30% in opex. The effectiveness of application portfolio rationalization, and its impact on the health system in terms of cost savings, revenue generation, and meeting the needs of clinical business users, depends on the right approach.

While traditional application rationalization projects will yield positive, quantifiable results, they typically do not take into account the “information rationalization” that will negatively impact value and time to care delivery. The most important aspect of an application portfolio is not the application itself, but rather the information trapped within the application stack. Changing perspective will increase the value of any rationalization effort. Releasing the information contained within legacy applications is the critical focus. Organizations can accomplish this by leveraging an enterprise service bus to overlay the information-rich interface engine architecture, leveraging existing information without the tired “rip and replace” approach usually offered by software and IT vendors. Once information is captured within the enterprise bus, it can be analyzed, consolidated into events, and used as real-time streaming information to better understand the real value of the data and its origins. While capex/opex cost reductions are the underlying principles of the APR effort, the health system CFO and CIO can work together to create additional value. Simply by releasing the information, and in some cases virtualizing the associated applications’ logic, the healthcare enterprise can preserve the value of, and improve access to, the information trapped within. This approach allows for the rationalization discovered by traditional disciplines while providing a single uniform source of information and infrastructure to rapidly enable new business solutions.

The time has come for the health system CFO and CIO to work hand in hand to accurately understand and align business needs with an agile information technology stack that promotes boundary-less access to information, independent of application silos, securely and dependably.

Service Desk Selection: 3 Checkpoints

AUTHORED BY SCOTT TEEL, MANAGED SERVICES EXECUTIVE @ GUIDEIT

Today’s Service Desk continues to evolve with the technology it supports for the end user community. Granted, it begins with a single seat and phone. But from phone calls to email, to self-service customer web portals, chat, and social media…the ways in which we engage help have changed and scaled dramatically.

All sources of customer engagement must be tracked and reported in a single ticketing system to ensure quality of service through measurable analysis of performance. And a strong value proposition is a must. As you or someone in your organization considers that value proposition, here are 3 checkpoints for selecting a Service Desk solution:

  1. Partnership. Service Desk capabilities are often labeled a commodity offering due to offshore capabilities. Many providers of these services battle for the lowest-cost solution without listening to and understanding individual customer requirements. If treated like a commodity, in most cases the service becomes a bad investment. The right partner will offer the right solution by listening to and understanding the demands and risks of your needs. Then they can apply the right automation, tools, and utilities to make the service flourish and mitigate risk.
  2. Pricing. Yes, there are many variables that drive costs up or down in a Service Desk offering…from onshore to offshore, languages, first call resolution, ticketing, tool types, reporting, IT, application support, and so on. Regardless, service providers want to fill their excess capacity. Invest the time to understand their situation. By asking the right questions about their capabilities and willingness to be flexible (and their ability to execute within such flexibility using a defined methodology), you can find great value through negotiating the right balance of service and pricing.
  3. Metrics. Ensure that your partner has the tools available to establish a baseline for service delivery while following ITIL processes that enable Continual Service Improvement (CSI) throughout the relationship. The right tools cover the availability and performance of the PBX/ACD system, the ticketing system, and any additional automated processes that demonstrate CSI. The right reporting is available weekly and monthly, and must be meaningfully measurable.

In summary, evaluate your options and ask a lot of questions about their situation. You will develop the leverage you need to achieve the right service with maximum value. Approach your evaluation this way and you will increase the probability of partnering with a group that serves as an extension of your team.

Balancing Creativity and Efficiency in IT Service Management (ITSM) Environments: 3 Best Practices

AUTHORED BY SCOTT TEEL, MANAGED SERVICES EXECUTIVE @ GUIDEIT

Although many IT service managers enjoy the thrill of a good chase (identifying the problem, developing possible solutions, and then testing those theories), leveraging the creativity of those outside-the-box thinkers can be a challenge. Most engineers and administrators base their problem solving on their own set of experiences and training. While this is part of the reason you hire them, it can sometimes limit their problem-solving efficiency and overall performance when measured against the objectives of the business. While some IT problems may be easily identified and solved, others require a much more “detective-like” approach and more creativity. So how does a leader balance creativity and efficiency in an ITSM environment?

Here are 3 best practices to ensure problem solving remains streamlined while still fostering creativity…

  1. Collaborate.  Infrastructure problems are complex and can span a multitude of functional areas. One-size-fits-all solutions are no longer the norm in IT, and most solutions today can coexist with or integrate into the foundation of your ITSM solution. So foster a proactive, organized collaboration environment that enables open sharing across domain expertise.
  2. Speak the same language and keep it simple.  Problems should be solved with a balance of tactical and strategic insight. Ensure the final solution is reached through small, easy-to-understand steps and milestones that achieve the overall business goals with measured results. Make sure your IT specialists are on the same page by providing a clear understanding of the problem, possible causes, and possible outcomes.
  3. Bring in help if needed.  Sometimes the right answer will come from outside your group. Don’t be afraid to consider this option.

Creativity can be balanced with efficiency by fostering an environment where ideas and solutions can be freely shared with an organized and collaborative approach. Join us next week for our next microblog post!

Fedora 20: Firefox Reports Flash as Vulnerable

This problem starts with Firefox reporting that your flash-plugin is out of date. The report disables all Flash content.

After this, we will take a look at Mozilla’s Plugin Check to see what it thinks is going on.

Here we can see that version 11.2.202.440 is reported as vulnerable. We will then check about:plugins to see if it agrees.

Again, this is also reporting 11.2.202.440, so there must be a problem, but it also tells us that there is an update available. Now, I run regular yum updates on this machine, and I actually noticed flash-plugin was updated just a few hours prior to seeing this alert. So let’s check the installed version.

[root@ltmmattoon matthew]# yum info flash-plugin
Loaded plugins: langpacks, refresh-packagekit
Installed Packages
Name        : flash-plugin
Arch        : x86_64
Version     : 11.2.202.442
Release     : release
Size        : 19 M
Repo        : installed
From repo   : adobe-flashplayer
Summary     : Adobe Flash Player 11.2
URL         : http://www.adobe.com/downloads/
License     : Commercial
Description : Adobe Flash Plugin 11.2.202.442
            : Fully Supported: Mozilla SeaMonkey 1.0+, Firefox 1.5+, Mozilla
            : 1.7.13+

Interesting: 11.2.202.442, which is higher than what Firefox is reporting. Of course, Firefox has been restarted, but let’s do it again just to make sure.

Now to fix it.

$ pwd
/home/matthew/.mozilla/firefox/cls7wbvm.default
$ mv pluginreg.dat pluginreg.dat.bak

Restart Firefox; it will collect new data on all of its plugins, and about:plugins will start reporting the correct version.

IT Project Management…Which Stakeholder Are You?

AUTHORED BY GUY WOLF, TRANSFORMATION EXECUTIVE @ GUIDEIT

So much material has already been developed and published about what a PMO is, what it can be, and how to set one up.  Much of the material is banal. For those of you who are fans of Monty Python, the “How to do it” skit comes to mind. This particular post focuses on something else: a perspective on stakeholder roles and the importance of clear objectives.

Often PMOs get started for the wrong reasons, putting a solution in place before fully understanding the primary objective. Some promote focusing on achieving a level of maturity first. Others propose starting at the project level, and as you demonstrate proficiency, moving “up” to the program, then portfolio level.  The problem with these approaches is that the “what” is confused for the “how.”

The best practice for an effective PMO is to develop a list of business objectives and customers that will be served with a business case that illustrates why implementing a PMO is better than the alternatives. The PMO, however one defines it, is not a project.  It is a business unit.  Therefore, just like Human Resources, Marketing, or Facilities, it must justify its existence by improving the lives of its customers.  What that means in your situation, and how to go about it, will be different from others. Below are some perspectives by role.

Customer/CIO:  Nearly all business improvement initiatives have a large component of Information Technology (IT) at their core. Frequently, IT is the single largest component, and implementation is often on the critical path to achieving the desired end state. Additionally, IT departments often suffer from a practice of project management that excludes all other departments in an enterprise.  This disconnect can create a misalignment in critical path objectives. Unfortunately the CIO too often holds the bag at the end if the broader strategy and governance are not easily accessible. What the CIO needs is clear governance or a seat at the strategy table to manage a complex, inter-related portfolio of initiatives that will deliver success to the company.

CFO: CFOs are expected to forecast and manage capital and operating expenses.  As enterprise business-change initiatives often carry high risk, a CFO has a strong desire to ensure that processes are in place to alert leadership in advance of potential variances and to manage expenses to the forecasted budget, even if it was set long before the project requirements were fully known.

CEO: charged with the overall success of the organization, the CEO must manage many competing priorities among multiple departments. Managing a global perspective includes oversight of limited capital investment resources spread across multiple strategic priorities.  To that end, CEOs require some method to weigh the various investment options and to select the combination that has the highest chance of achieving the overall organizational objectives.

Business Unit Leaders (Sponsors):  charged with growing and improving their areas of responsibility. They have a need for a well-defined process to engage IT resources in helping them prioritize projects and source them with the right resources. Furthermore, they need visibility to relevant status reporting with opportunity to make business decisions to navigate a successful result.

Steering Committee: responsible for weighing the costs, risks and benefits of multiple project options, often without certainty of the inputs.  They require a method that provides as much information as possible regarding objectives, resources, and stakeholders.  For projects underway, visibility to insights through reporting enables better decision-making throughout the process.

Project Managers: need support for collecting status data, enabling focus on day-to-day decision making and management rather than task-driven administration; access to resources across multiple matrixed towers in the organization; and access to key stakeholders to make decisions and keep projects on track.

Team members: require easy data collection that helps with status reporting and doesn’t take a lot of time to use, along with respect for a balance of time between supporting operations and meeting project demands from multiple project manager stakeholders.

Choosing objectives means limiting some, and eliminating others. Prioritization isn’t easy but it’s necessary to increase the probability of extending the long-term value of your projects. There are some great templates that can be used in building and operating a PMO to improve the quality and speed with which we achieve our goals. If you would like more information, drop a comment or email me at guy.wolf@guideit.com. I welcome your feedback, as we strive to do technology right, and do projects right.

BlackBerry Z30: No Update to 10.3.1

I have a BlackBerry Z30 (STA100-5), which I was excited to update to the latest release of BB10, announced on February 9, 2015 (link).  However, when I attempted to install the update over the air, it kept telling me that I was already on the latest version.  This was obviously incorrect (I was on 10.2.1.3062, the latest prior to 10.3.1).

Here are the things that I tried that were unsuccessful:

  • Reboots (including power off).
  • Removing the SIM and using wifi only.
  • Waiting.

Eventually, on the advice of a friend who already had the update, I was able to get it installed:

  1. Turn off Mobile Network.
  2. Power Off.
  3. Remove SIM.
  4. Power On.
  5. Check for update.

At this point, something was different: checking for the update took significantly longer, which of course got me excited, thinking it must actually be doing something.  Twenty minutes later I realized I must have been wrong and killed the Settings app. I then checked for the update again; it immediately found it, and I was able to start the install.  Once the update was downloading, I re-inserted my SIM and enabled mobile networking.

Obviously there is room for streamlining this procedure (whether you actually need to disable mobile networking and remove the SIM is the most obvious question), but since I didn’t have a box full of these devices with this problem, I was unable to optimize the procedure. Feel free to tinker, but if nothing seems to work, give the above a go and see if you have the same experience.

Also important to note: I purchased my BlackBerry directly from the BlackBerry Store. If you purchased yours from a carrier, your mileage may vary based on their approvals.

Physicians, Clinicians: Thank You

Authored by Mark Johnson, VP Managed Services @ GuideIT

For anyone who has spent the bulk of their career in healthcare IT, a venture into an in- or out-patient setting for one’s own health is always an interesting experience.  Throughout the process you can’t help but say, “it’s 2015 and we’re still doing this?”  For me it was in preparation for that first (dreaded) “over 50 procedure”.  It started with far too much paperwork, some of it redundant, and some of it collecting information I had already provided in their portal (sadly with no linkage to my HealthVault account).  Then I arrived at the clinic and was faced not only with more paperwork, but with music playing way too loud on a morning when I was already grumpy from not having been able to eat the day prior.

But then, everything changed.  Once I left the waiting room, every clinician I interacted with was simply outstanding.  From the prep nurse, to the anesthesiologist, to the doctor himself.  They actually seemed to really and truly enjoy their work!  And their positive approach to delivery of care translated directly to an extremely positive patient-clinician interaction.

So while there’s plenty of time to talk about how to better leverage IT in the delivery of care, for me today this is simply a “hats off and well done” to the people who really make such a tremendous difference in our lives – clinicians and their staff.
Oh, and if you’re wondering – it turns out it was a very good thing I had this taken care of.  So listen to your physician.

Multi-Sourcing…The Right IT Governance for Maximizing Business Outcomes

Authored by Jeff Smith, VP Business Development @ GuideIT

A national healthcare provider was ready to move from multiple PBX systems to a VOIP-centric model for their communications…the transition, one piece of a broader multi-source IT strategy. Simple enough, right? Not exactly. This transition was a monster…500 locations and more than 1100 buildings. Additionally, the provider cares for patients, the majority of whom are in some form of acute need. Sure, any business requires clean execution in a project of this magnitude. But few businesses have the sole mission of caring for the acute health needs of their customers like healthcare providers do for their patients.

Truly lots of moving parts in this story…a story representing one part of the bigger picture. A critical attribute of this provider’s success was ensuring the right IT Governance function encompassing their multi-source strategy.

So what is the right governance? According to Gartner, governance is the decision framework and process by which enterprises make investment decisions and drive business value. Take that one step further applying IT and the definition is, “IT Governance (ITG) is the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals. IT demand governance (ITDG—what IT should work on) is the process by which organizations ensure the effective evaluation, selection, prioritization, and funding of competing IT investments; oversee their implementation; and extract business benefits.”

Now consider “why” the right IT Governance is critical in a multi-sourcing environment. When multiple vendor partners serve in support of the broader business mission, the opportunity to optimize outcomes for the business is huge. And so is the risk. The opportunity is there because the organization can leverage the specialization of subject matter experts necessary in a highly complex IT environment driven by growing business demands. One partner specializes in apps, another in cloud infrastructure, another in mobility, and so on. They all bring optimal value in areas critical to support the business…thus the core value of multi-sourcing.

Therein lies the risk too. Without the right governance model, no clear accountability exists to ensure open collaboration and visibility across specialists. Specialists will act in silos. And we all know how silos hurt business. Simply put, the “why” for the right governance is to optimize outcomes through maximizing specialization while minimizing the risk of “silo-creep”. The right governance closes the gap between what IT departments think the business requires and what the business thinks the IT department is able to deliver. Organizations need to have a better understanding of the value delivered by IT and the multiple vendor partners leveraged…some of whom are ushered in through business stakeholders.

Because organizations rely more and more on new technology, executive leadership must be more aware of critical IT risks and how they are being managed. Take, for example, our communications transition story from earlier…if there is a lack of clarity and transparency when making such a significant IT decision, the transition project may stall or fail, putting the business at risk and, in this case, putting patients’ lives at risk. That has a crippling impact on the broader business and on future consideration of the right new technologies to leverage.

Conclusion: the right IT Governance is critical to optimizing business outcomes.

Perot Back in IT Services

MAKES MAJOR INVESTMENT IN GUIDEIT

Plano, TX – Monday, February 2, 2015 – GuideIT, a Plano-based provider of technology optimization services, today announced that the Perot family has increased their investment in the company to become its largest shareholder. GuideIT, newly branded as A Perot Company, welcomes Ross Perot, Jr. as a member of the board.

Corporate portrait session with Ross Perot, Ross Perot, Jr., and the founders and executives of GuideIT, taken in the front foyer of Ross Perot, Sr.’s office in Plano, Texas

Back Row: Chuck Lyles, CEO  |  John Furniss, Vice President  |  Scott Barnes, Board Member  |  Tim Morris, Vice President  |  John Lyon, CFO

Front Row: Ross Perot, Jr., Board Member  |  H Ross Perot  |  Russell Freeman, Board Member

“Through EDS and Perot Systems, my family has played a major role in shaping the IT services industry,” said Perot, Jr. “GuideIT has fostered a great entrepreneurial spirit and a strong commitment to delivering customer results in a rapidly growing organization. I look forward to building a great company.”

GuideIT has a suite of solutions and an engagement approach tailored for today’s business environment and technology issues.  The company’s revenue more than tripled in 2014.

“We are building a next-generation services company based on timeless services industry principles,” said Chuck Lyles, CEO.  “We are honored to be associated with the Perot family who are known for their commitment to excellent customer service, outstanding business management and the highest ethical standards.”

GuideIT offers services that help customers optimize their technology environments. Primary offerings include consultative services such as technology vendor management, project management, enterprise assessments, and a suite of deployment and managed services. By deploying these solutions in a collaborative, flexible engagement approach, customers achieve tangible business results.

About GuideIT

As a provider of technology optimization services, we believe doing technology right is the difference between leaders and the rest. We help companies lead.
Through a collaborative and easy-to-do-business-with approach, the company helps customers align IT operations in meeting their strategic business needs, better govern and manage the cost of IT, and effectively navigate change in technology.

Media Contact

James Fuller
Public Strategies, Inc.
214-613-0028
jfuller@pstrategies.com

MultiSourcing…A Critical Strategy for Aligning IT with the Business Mission

Authored by Chuck Lyles, CEO @ GuideIT

A growing trend in IT services is the implementation of strategies designed to migrate IT operations from a single provider to an environment leveraging multiple specialty companies. As the market matures, this trend can better enable CIOs to execute strategically, driving greater effectiveness and efficiency in operations.

So what are the high level benefits and outcomes of multi-sourcing?

The right multi-sourcing strategy allows IT teams to dilute risk with partners who specialize in a particular discipline or technology.  Additionally, this type of strategy facilitates greater flexibility enabling the internal agility necessary for adapting to changing priorities…a consistent theme in supporting the broader business mission. Specialized firms are more responsive to customer needs, more motivated to consistently drive innovation, and better at implementing disruptive technologies that drive effectiveness through more automation.

What are some of the challenges and potential pitfalls?

Accountability. Yes, multi-sourcing is a critical approach for leveraging IT in supporting the needs of the business. Yet to be truly strategic in this approach, leaders must require accountability. Fail to create an environment of accountability in execution, and the strategy isn’t worth the paper it’s written on. Another challenge is simplicity. A “multi” approach, by definition, absent a sound strategy, has the potential to introduce complexity and silos into your environment. So what’s the answer for ensuring accountability and simplicity in your multi-sourcing approach? Clear purpose, aligned incentives, and shared values. Easy to say; tough to do. More on this in future posts.

What’s your perspective on multi-source strategies?

SPARC Logical Domains: Alternate Service Domains Part 3

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two of this series we took the information from Part One and split out our PCI Root Complexes and we configured and installed an alternate domain.  We were also able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three (this article) we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.  At the end of this article, we will be able to reboot either the primary or alternate domain without it having an impact on any of the running guests.

Create Redundant Virtual Services

At this point, we have a fully independent I/O Domain named alternate. This is great for some use cases; however, if we don’t enable it to be a Service Domain as well, we won’t be able to extend that independence to our Guest Domains. This requires that we create Virtual Services for each of the critical components of a domain.

We previously created a primary-vds0, and that will suit us just fine; however, we will also need an alternate-vds0.

# ldm add-vdiskserver primary-vds0 primary
# ldm add-vdiskserver alternate-vds0 alternate

We did not provision any Virtual Switches previously, as we had no need for them: we handed out physical NICs directly to primary and alternate. Here we will create both primary-vsw0 and alternate-vsw0.

# ldm add-vswitch net-dev=net0 primary-vsw0 primary
# ldm add-vswitch net-dev=net0 alternate-vsw0 alternate

To connect to the console of logical domains we must have a virtual console concentrator. This should have been set up previously in order to install the alternate domain.

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary

Now let’s save our settings, since we have set up the services.

# ldm add-config redundant-virt-services

With our progress saved we can move on.

Creating Multipath Storage Devices

In order to utilize the redundancy of LDM, we need to create redundant virtual disk devices. The key difference here is that we must specify an mpgroup.

# ldm add-vdsdev mpgroup=san01-fc primary-backend ldm1-disk0@primary-vds0

And now the same device, using the alternate domain.

# ldm add-vdsdev mpgroup=san01-fc alternate-backend ldm1-disk0@alternate-vds0

Another thing to notice is that when using multiple protocols on the same SAN, it is important to use a different mpgroup for each protocol, because you can have failures in the interconnect layers of one protocol that don’t affect the others. Case in point: a failure of the FC fabric wouldn’t affect the availability of NFS services, so those failures need to be monitored separately. The jury is still out on where the line should be drawn in terms of what goes into a single mpgroup. As I was testing live migration, it seemed more effective to use the VM and the protocol as the boundary, since migration checks the mpgroup for the number of members on both sides as part of its validation. So, in this case, the groups might be ldm1-fc and ldm1-nfs.
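To make that concrete, here is a sketch of what per-VM, per-protocol mpgroups might look like. The backend paths are hypothetical placeholders, not real devices, and the mpgroup names follow the ldm1-fc/ldm1-nfs convention suggested above.

```shell
# Sketch only: one mpgroup per VM per protocol. The backend paths
# (/dev/dsk/FC-BACKEND, /nfs/NFS-BACKEND) are hypothetical placeholders.
ldm add-vdsdev mpgroup=ldm1-fc /dev/dsk/FC-BACKEND ldm1-disk0@primary-vds0
ldm add-vdsdev mpgroup=ldm1-fc /dev/dsk/FC-BACKEND ldm1-disk0@alternate-vds0
ldm add-vdsdev mpgroup=ldm1-nfs /nfs/NFS-BACKEND ldm1-disk1@primary-vds0
ldm add-vdsdev mpgroup=ldm1-nfs /nfs/NFS-BACKEND ldm1-disk1@alternate-vds0
```

With this boundary, a fabric failure or a migration pre-check only involves the members of the one VM/protocol pair in question.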

# ldm add-vdsdev mpgroup=san01-nfs primary-backend ldm1-disk1@primary-vds0

Again the same device for the alternate domain.

# ldm add-vdsdev mpgroup=san01-nfs alternate-backend ldm1-disk1@alternate-vds0

Now we are ready to support the domain. Next, we will create the domain and assign the disk resources. It is important to note that we do not assign both disk resources, only the primary; the mpgroup takes care of the redundancy.

# ldm add-domain ldm1
# ldm set-vcpu 16 ldm1
# ldm set-memory 16G ldm1
# ldm add-vdisk disk0 ldm1-disk0@primary-vds0 ldm1

In the next section we will create some redundant network interfaces.

Creating Redundant Guest Networking

Redundant networking is really not any different from non-redundant networking: we simply create two VNICs, one on primary-vsw0 and the other on alternate-vsw0. Once provisioned, we create an IPMP interface inside of the guest. In theory you could use DLMP as well, though I haven’t tested this option.

# ldm add-vnet vnet0 primary-vsw0 ldm1
# ldm add-vnet vnet1 alternate-vsw0 ldm1

From the control domain we now need to bind and start the guest, then install it.

# ldm bind ldm1
# ldm start ldm1

I am assuming that you know how to install Solaris, as you would already have done so at least twice to get to this point. Now it is time to configure networking. If you need help with configuring networking, see the following articles.

Solaris 11: Network Configuration Basics

Solaris 11: Network Configuration Advanced

ldm1# ipadm create-ip net0
ldm1# ipadm create-ip net1
ldm1# ipadm create-ipmp -i net0 -i net1 ipmp0
ldm1# ipadm create-addr -T static -a 192.168.1.11/24 ipmp0/v4
ldm1# route -p add default 192.168.1.1

At this point, you have all the pieces in place for redundant guests. Now it is time to do some rolling reboots of the primary and alternate domains and ensure your VM stays up and running. Inside the guest, the only thing amiss is that you will see IPMP members go into a failed state, then come back up as the services are restored.
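To watch that failover from inside the guest during the rolling reboots, the standard Solaris 11 IPMP observability tool can be used; a quick sketch:

```shell
# Run inside the guest while rebooting a service domain.
ipmpstat -g   # group view: ipmp0 state and active interfaces
ipmpstat -i   # interface view: net0/net1 should show failed, then recover
```

Repeating `ipmpstat -i` through a primary reboot and then an alternate reboot should show each member fail and recover in turn, while the ipmp0 group stays up.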

One final note: from the ILOM, issuing -> stop /SYS will shut down the physical hardware, which means both domains and all guests.

SPARC Logical Domains: Alternate Service Domains Part 2

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two (this article) we are going to take that information and split out our PCI Root Complexes and configure and install an alternate domain.  At the end of this article, you will be able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.

Remove PCI Roots From Primary

The changes that we need to make require that we put LDM into delayed reconfiguration mode, and a reboot will be required to implement them. This mode also prevents further changes to other domains.

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we remove the unneeded PCI Roots from the primary domain; this will allow us to assign them to the alternate domain.

# ldm remove-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm remove-io pci_3 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's save our configuration.

# ldm add-config reduced-io

Now a reboot to make the configuration active.

# reboot

When it comes back up we should see the PCI Roots unassigned.

Create Alternate Domain

Now we can create our alternate domain and assign it some resources.

# ldm add-domain alternate
# ldm set-vcpu 16 alternate
# ldm set-memory 16G alternate

We have set this with 2 cores and 16GB of RAM.  Your sizing will depend on your use case.

Add PCI Devices to Alternate Domain

We are assigning pci_1 and pci_3 to the alternate domain, which will give it direct access to two of the on-board NICs, two of the disks, and half of the PCI slots. It will also inherit the CDROM as well as the USB controller.

One thing worth pointing out: the disks are not split evenly; pci_0 has four disks, while pci_3 has only two. If your configuration includes six disks, I would recommend using the third and fourth in the primary as a non-redundant storage pool, perhaps to stage firmware and such for patching. The bottom line is that you need to purchase the hardware with four drives minimum.

# ldm add-io pci_1 alternate
# ldm add-io pci_3 alternate

Here we have NICs and disks on our alternate domain; now we just need something to boot from and we can get the install going.

Let's save our config before moving on.

# ldm add-config alternate-domain

With the config saved we can move on to the next steps.

Install Alternate Domain

We should still have our CD in from the install of the primary domain.  After switching the PCI Root Complexes the CD drive will be presented to the alternate domain (as it is attached to pci_3).

First thing to do is bind our domain.

# ldm bind alternate

Then we need to start the domain.

# ldm start alternate

Next we need to determine which port the console service is listening on for this particular domain. In our case we can see it is 5000.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m

When using these various consoles, always be attentive to the escape sequence; for telnet it is ^], which is “CTRL” + “]”. Once we have determined where to telnet to, we can start the connection. Also important to note: you will see ::1: Connection refused. This is because we are connecting to localhost, which resolves to the IPv6 loopback first; if you don’t want to see that error, connect to 127.0.0.1 (the IPv4 loopback address).

# telnet localhost 5000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to AK00176306.
Escape character is '^]'.

Connecting to console "alternate" in group "alternate" ....
Press ~? for control options ..

telnet> quit
Connection to AK00176306 closed.

I will let you go through the install on your own; I am assuming that you know how to install the OS itself.

Now let's save our config, so that we don’t lose our progress.

# ldm add-config alternate-domain-config

At this point, if we have done everything correctly, we can reboot the primary domain without disrupting service to the alternate domain. Pings during a reboot will illustrate where we are in the build. Of course, you would have to have networking configured on the alternate domain. And don’t forget the simple stuff like mirroring your rpool; it would be a pity to go to all this trouble and not have a basic level of redundancy such as mirrored disks.

Test Redundancy

At this point, the alternate and the primary domain are completely independent. To validate this, I recommend setting up a ping to both the primary and the alternate domain and rebooting the primary. If done correctly, you will not lose any pings to the alternate domain. Keep in mind that while the primary is down you will not be able to use the control domain, which is the only domain that can configure and start/stop other domains.
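A small helper can summarize the result of such a test; the sketch below just extracts the loss percentage from a ping summary line (the summary format shown is an assumption based on common ping implementations, so adjust the pattern to your platform's output).

```shell
#!/bin/sh
# Extract the packet-loss percentage from a ping summary line.
parse_loss() {
  printf '%s\n' "$1" | sed -n 's/.*[^0-9]\([0-9][0-9]*\)% packet loss.*/\1/p'
}

# A reboot of the primary should leave the alternate at 0% loss.
summary='64 packets transmitted, 64 received, 0% packet loss, time 63082ms'
parse_loss "$summary"   # prints: 0
```

Run one ping against each domain for the duration of the reboot, then feed each summary line through the helper; the alternate should report 0 while the primary shows a gap.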

SPARC Logical Domains: Alternate Service Domains Part 1

In this series, we will be going over configuring alternate I/O and Service Domains, with the goal of increasing the serviceability of SPARC T-Series servers without impacting other domains on the hypervisor, essentially enabling rolling maintenance without having to rely on live migration or downtime. It is important to note that this is not a cure-all; for example, base firmware updates would still be disruptive. However, minor firmware, such as that for disks and I/O cards, should be able to be applied in a rolling fashion.

In Part One we will go through the initial Logical Domain configuration, as well as mapping out the devices we have and if they will belong in the primary or the alternate domain.

In Part Two we will go through the process of creating the alternate domain and assigning the devices to it, thus making it independent of the primary domain.

In Part Three we will create redundant services to support our Logical Domains as well as create a test Logical Domain to utilize these services.

Initial Logical Domain Configuration

I am going to assume that your configuration is currently at the factory default and that you, like me, are using Solaris 11.2 on the hypervisor.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 511G 0.4% 0.3% 6h 24m

The first thing we need to do is remove some of the resources from the primary domain so that we are able to assign them to other domains. Since the primary domain is currently active and using these resources, we will enable delayed reconfiguration mode. This mode accepts all changes and then enables the configuration on a reboot of that domain (in this case primary, which is the control domain of the physical machine).

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we can start reclaiming some of those resources. I will assign 2 cores (16 vCPUs) and 16GB of RAM to the primary domain.

# ldm set-vcpu 16 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm set-memory 16G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

I like to save configurations often when we are making a lot of changes.

# ldm add-config reduced-resources

Next we will need some services to allow us to provision disks to domains and to connect to the console of domains for the purposes of installation or administration.

# ldm add-vdiskserver primary-vds0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's add another configuration to bookmark our progress.

# ldm add-config initial-services

We need to enable the Virtual Network Terminal Server service; this allows us to telnet from the control domain into the other domains.

# svcadm enable vntsd

Finally a reboot will put everything into action.

# reboot

When the system comes back up we should see a drastically different LDM configuration.

Identify PCI Root Complexes

All the T5-2’s that I have looked at have been laid out the same, with a SAS HBA and an onboard NIC on pci_0 and pci_3, and the PCI-E slots spread across all four roots. To split everything evenly, pci_0 and pci_2 stay with the primary, while pci_1 and pci_3 go to the alternate. However, so that you understand how we know this, I will walk you through identifying the root complexes as well as the discrete types of devices.

# ldm ls -l -o physio primary

NAME
primary

IO
DEVICE PSEUDONYM OPTIONS
pci@340 pci_1
pci@300 pci_0
pci@3c0 pci_3
pci@380 pci_2
pci@340/pci@1/pci@0/pci@4 /SYS/MB/PCIE5
pci@340/pci@1/pci@0/pci@5 /SYS/MB/PCIE6
pci@340/pci@1/pci@0/pci@6 /SYS/MB/PCIE7
pci@300/pci@1/pci@0/pci@4 /SYS/MB/PCIE1
pci@300/pci@1/pci@0/pci@2 /SYS/MB/SASHBA0
pci@300/pci@1/pci@0/pci@1 /SYS/MB/NET0
pci@3c0/pci@1/pci@0/pci@7 /SYS/MB/PCIE8
pci@3c0/pci@1/pci@0/pci@2 /SYS/MB/SASHBA1
pci@3c0/pci@1/pci@0/pci@1 /SYS/MB/NET2
pci@380/pci@1/pci@0/pci@5 /SYS/MB/PCIE2
pci@380/pci@1/pci@0/pci@6 /SYS/MB/PCIE3
pci@380/pci@1/pci@0/pci@7 /SYS/MB/PCIE4

This shows us that pci@300 = pci_0, pci@340 = pci_1, pci@380 = pci_2, and pci@3c0 = pci_3.
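For scripting against this layout, the mapping can be captured in a small helper; the addresses are the ones from the listing above and are specific to this T5-2, so verify them on your own machine.

```shell
#!/bin/sh
# Map a PCI root complex device path to its ldm pseudonym,
# per the `ldm ls -l -o physio primary` output above (T5-2 specific).
pci_pseudonym() {
  case "$1" in
    pci@300) echo pci_0 ;;
    pci@340) echo pci_1 ;;
    pci@380) echo pci_2 ;;
    pci@3c0) echo pci_3 ;;
    *)       echo unknown ;;
  esac
}

pci_pseudonym pci@340   # prints: pci_1
```

The device-mapping steps in the next sections all end with one of these pci@ addresses, so a helper like this saves flipping back to the listing.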

Map Local Disk Devices To PCI Root

First we need to determine which disk devices are in the root zpool, so that we know which ones cannot be removed.

# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 70.3G in 0h8m with 0 errors on Fri Feb 21 05:56:34 2014
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA04385ED60d0 ONLINE 0 0 0
c0t5000CCA0438568F0d0 ONLINE 0 0 0

errors: No known data errors

Next we use mpathadm to find the Initiator Port Name. To do that, we look at slice 0 of c0t5000CCA04385ED60d0.

# mpathadm show lu /dev/rdsk/c0t5000CCA04385ED60d0s0
Logical Unit: /dev/rdsk/c0t5000CCA04385ED60d0s2
mpath-support: libmpscsi_vhci.so
Vendor: HITACHI
Product: H109060SESUN600G
Revision: A606
Name Type: unknown type
Name: 5000cca04385ed60
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA

Paths:
Initiator Port Name: w5080020001940698
Target Port Name: w5000cca04385ed61
Override Path: NA
Path State: OK
Disabled: no

Target Ports:
Name: w5000cca04385ed61
Relative ID: 0

Our output shows us that the initiator port is w5080020001940698.

# mpathadm show initiator-port w5080020001940698
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4

So we can see that this particular disk is on pci@300, which is pci_0.

Map Ethernet Cards To PCI Root

First we must determine the underlying device for each of our network interfaces.

# dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 10000 full ixgbe0

In this case the device is ixgbe0. We can then look at the device tree to see where the link points, which tells us which PCI Root this device is connected to.

# ls -l /dev/ixgbe0
lrwxrwxrwx 1 root root 53 Feb 12 2014 /dev/ixgbe0 -> ../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0

Now we can see that it is using pci@300, which translates into pci_0.
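This lookup lends itself to a quick bit of shell: given the symlink target, the root complex is just the first path component under /devices. The path below is the one from the listing above; this is a convenience sketch, not part of the original procedure.

```shell
#!/bin/sh
# Extract the PCI root complex from a /devices path, such as the
# ixgbe0 symlink target shown above.
pci_root() {
  path="${1#*devices/}"   # drop everything through "devices/"
  echo "${path%%/*}"      # keep only the first path component
}

pci_root '../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0'   # prints: pci@300
```

The same helper works for the InfiniBand and Fibre Channel paths in the following sections, since they all begin with the root complex.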

Map Infiniband Cards to PCI Root

Again, let's determine the underlying device name of our InfiniBand interfaces. On my machine they defaulted to net2 and net3; however, I had previously renamed the links to ib0 and ib1 for simplicity. This procedure is very similar to that for Ethernet cards.

# dladm show-phys ib0
LINK MEDIA STATE SPEED DUPLEX DEVICE
ib0 Infiniband up 32000 unknown ibp0

In this case our device is ibp0.  So now we just check the device tree.

# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Nov 26 07:17 /dev/ibp0 -> ../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0

We can see by the path that this is using pci@380, which is pci_2.

Map Fibre Channel Cards to PCI Root

Perhaps we need to split up some Fibre Channel HBAs as well. The first thing to do is look at the cards themselves.

# luxadm -e port
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl NOT CONNECTED

We can see here that these use pci@300 which is pci_0.

The Plan

Basically we are going to split our PCI devices by even and odd, with even staying with the primary and odd going to the alternate. On the T5-2, this results in the PCI-E cards on the left side serving the primary, and the cards on the right serving the alternate.

Here is a diagram of how the physical devices are mapped to PCI Root Complexes.

FIGURE 1.1 – Oracle SPARC T5-2 Front View

FIGURE 1.2 – Oracle SPARC T5-2 Rear View

References

SPARC T5-2 I/O Root Complex Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.z40005601508415.html

SPARC T5-2 Front Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgcddce.html#scrolltoc

SPARC T5-2 Rear Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgdeaei.html#scrolltoc

SPARC Logical Domains: Live Migration

One of the ways that we are able to accomplish regularly scheduled maintenance is by utilizing Live Migration; with this we can migrate workloads from one physical machine to another without service interruption. The way it is done with Logical Domains is much more flexible than with most other hypervisor solutions: it doesn’t require any complicated cluster setup or management layer, so you could literally utilize any compatible hardware at the drop of a hat.

This live migration article also relies on technology that I have written about but not yet published (it should be published within the next week): Alternate Service Domains. If you are using alternate service domains, Live Migration is still possible; if you are not, Live Migration is actually easier (the underlying devices are simpler, so they are simpler to match).

Caveats to Migration

  • Virtual Devices must be accessible on both servers, via the same service name (though the underlying paths may be different).
  • IO Domains cannot be live migrated.
  • Migrations can be either online (“live”) or offline (“cold”); the state of the domain determines which.
  • When doing a cold migration, virtual devices are not checked to ensure they exist on the receiving end; you will need to check this manually.

Live Migration Dry Run

I recommend performing a dry run of any migration prior to performing the actual migration.  This will highlight any configuration problems prior to the migration happening.

# ldm migrate-domain -n ldom1 root@server
Target Password:

This will surface any errors that would occur in an actual migration, but without actually causing you problems.

Live Migration

When you are ready to perform the migration, remove the dry-run flag. This process will also do the appropriate safety checks to ensure that everything is good on the receiving end.

# ldm migrate-domain ldom1 root@server
Target Password:

The migration will now proceed and, unless something goes wrong, the domain will come up on the other system.

Live Migration With Rename

We can also rename the logical domain as part of the migration; we simply specify the new name.

# ldm migrate-domain ldom1 root@server:ldom2
Target Password:

In this case, the original name was ldom1 and the new name is ldom2.

Common Errors

Here are some common errors.

Bad Password or No LDM on Target

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to establish connection with ldmd(1m) on target: server
Check that the 'ldmd' service is enabled on the target machine and
that the version supports Domain Migration. Check that the 'xmpp_enabled'
and 'incoming_migration_enabled' properties of the 'ldmd' service on
the target machine are set to 'true' using svccfg(1M).

Probable Fixes – Ensure you are attempting to migrate to the correct hypervisor, that the username/password combination is correct, that the user has the appropriate level of access to ldmd, and that ldmd is running.
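Based on the error text above, a quick target-side check might look like the following sketch; the property names are taken directly from that message, and the exact SMF property paths may vary by ldmd version.

```shell
# On the target machine: confirm ldmd is online and migration-enabled.
svcs ldmd
svccfg -s ldmd listprop ldmd/xmpp_enabled
svccfg -s ldmd listprop ldmd/incoming_migration_enabled
# If a property is false, enable it and restart ldmd, e.g.:
#   svccfg -s ldmd setprop ldmd/incoming_migration_enabled=true
#   svcadm restart ldmd
```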

Missing Virtual Disk Server Devices

# ldm migrate-domain ldom1 root@server
Target Password:
The number of volumes in mpgroup 'zfs-ib-nfs' on the target (1) differs
from the number on the source (2)
Domain Migration of LDom ldom1 failed

Probable Fixes – Ensure that the underlying virtual disk devices match; if you are using mpgroups, the entire mpgroup must match on both sides.

Missing Virtual Switch Device

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to find required vsw alternate-vsw0 on target machine
Domain Migration of LDom ldom1 failed

Probable Fixes – Ensure that the underlying virtual switch devices match in both locations.

Check Migration Progress

One thing to keep in mind is that during the migration process, the hypervisor that is being evacuated is the authoritative one in terms of controlling the process, so status should be checked there.

source# ldm list -o status ldom1

NAME
ldom1

STATUS
OPERATION PROGRESS TARGET
migration 20% 172.16.24.101:ldom1

It can however be checked on the receiving end, though it will look a little bit different.

target# ldm list -o status ldom1

NAME
ldom1

STATUS
OPERATION PROGRESS SOURCE
migration 30% ak00176306-primary

The big thing to notice is that it shows the source on this side. Also, if we changed the name as part of the migration, the status will show the new name.

Cancel Migration

Of course, if you need to cancel a migration, this would be done on the hypervisor that is being evacuated, since it is authoritative.

# ldm cancel-operation migration ldom1
Domain Migration of ldom1 has been canceled

This allows you to cancel any accidentally started migrations; in practice, though, anything you needed to cancel would likely have generated an error before you got this far.

Cross CPU Considerations

By default, logical domains are created to use very specific CPU features based on the hardware they run on; as such, live migration only works by default between the exact same CPU type and generation. However, if we change the cpu-arch property of a domain, we can trade some CPU features for broader migration compatibility. The available classes are:

Native – Allows migration between same CPU type and generation.

Generic – Allows the most generic processor feature set to allow for widest live migration capabilities.

Migration Class 1 – Allows migration between T4, T5 and M5 server classes (also supports M10 depending on firmware version)

SPARC64 Class 1 – Allows migration between Fujitsu M10 servers.

Here is an example of how you would change the CPU architecture of a domain.  I personally recommend using this sparingly and building your hardware infrastructure so that you have capacity on the same generation of hardware; however, in certain circumstances this can make a lot of sense if the performance implications are not too great.

# ldm set-domain cpu-arch=migration-class1 ldom1

I personally wouldn’t count on the cross-CPU functionality; however, in some cases it might make sense for your situation.  Either way, live migration of logical domains is implemented very effectively and adds a lot of value.

Solaris 11: Configure IP Over Infiniband Devices

In this article we will be going over the configuration of an InfiniBand interface with the IPoIB protocol on Solaris 11, specifically Solaris 11.2 (previous versions of Solaris 11 should work the same way, although there have been changes in the ipadm and dladm commands).

Identify Infiniband Datalinks

First we need to identify the underlying InfiniBand datalinks; in my case, net2 and net3.

# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net0 Ethernet up 1000 full ixgbe0
net2 Infiniband up 32000 unknown ibp0
net3 Infiniband up 32000 unknown ibp1
net5 Ethernet up 1000 full vsw0

Another way to confirm the InfiniBand interfaces is to use the show-ib subcommand.

# dladm show-ib
LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net2 10E0000128EBC8 10E0000128EBC9 1 up kel01-gw01 0a-eth-1 7FFF,FFFF
 kel01-gw02 0a-eth-1
net3 10E0000128EBC8 10E0000128EBCA 2 up kel01-gw01 0a-eth-1 7FFF,FFFF
 kel01-gw02 0a-eth-1

Rename Infiniband Datalinks

I like to rename the datalinks to ib0 and ib1; it makes it easier to keep everything nice and tidy.

# dladm rename-link net2 ib0
# dladm rename-link net3 ib1

Now to show the updated datalinks.

# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net0 Ethernet up 1000 full ixgbe0
ib0 Infiniband up 32000 unknown ibp0
ib1 Infiniband up 32000 unknown ibp1
net5 Ethernet up 1000 full vsw0

Now in subsequent actions we will use ib0 and ib1 as our datalinks.

Create Infiniband Partition

First, let's talk about partitions.  Partitions are most closely analogous to VLANs; however, their purpose is to provide isolated segments, so there is no concept of a “router” on IB.  Your use case might be isolating storage or database services, or even isolating customers from one another (which you definitely should do if you have a multitenant environment where customers have access to the operating system).  So what we want to do is identify the partition to be created; if you do not use IB partitioning, you will need to use the “default” partition of ffff.

# dladm create-part -l ib0 -P 0xffff pffff.ib0

If you do use partitioning, then you will need to define the partition that you wish to use; for this example, 7fff.  Which partitions are available is determined from the dladm show-ib output: the PKEYS it lists are the partitions.

# dladm create-part -l ib0 -P 0x7fff p7fff.ib0

Now let's review the partitions.

# dladm show-part
LINK PKEY OVER STATE FLAGS
pffff.ib0 FFFF ib0 unknown ----
p7fff.ib0 7FFF ib0 unknown ----

We now have our two partitions defined.
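Since the PKEYS column of dladm show-ib is what tells you which partitions exist, a small helper can list them.  This is just text processing over the listing format shown earlier, so it runs anywhere; the sample text below mirrors the show-ib output above.

```shell
# list_pkeys: skip the header, take the PKEYS column (8th field on the
# primary rows), split on commas, and de-duplicate.
list_pkeys() {
  awk 'NR > 1 && NF >= 8 {print $8}' | tr ',' '\n' | sort -u
}

sample='LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net2 10E0000128EBC8 10E0000128EBC9 1 up kel01-gw01 0a-eth-1 7FFF,FFFF
 kel01-gw02 0a-eth-1'
echo "$sample" | list_pkeys    # prints 7FFF and FFFF, one per line
```

On a Solaris host you would feed it live output: dladm show-ib | list_pkeys.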

Create IP Interfaces

Now that we have the InfiniBand pieces configured, we simply create the IP interfaces so that we can subsequently assign IP addresses.  The IP interfaces are named as follows (ibpartition.interfacename).  Below is for the “default” partition.

# ipadm create-ip pffff.ib0

And for our named partition for 7fff we create an interface as well.

# ipadm create-ip p7fff.ib0

Now we have our interfaces configured correctly.

Create IP Address

Now the easy part: this is exactly the same as with a standard Ethernet interface.  Assign a static IP address for the default partition.

# ipadm create-addr -T static -a 10.1.10.11/24 pffff.ib0/v4

Also for our named partition.

# ipadm create-addr -T static -a 10.2.10.11/24 p7fff.ib0/v4

Now a few ping tests and we are in business.  Remember, you will not be able to ping from one partition to another, so you will need to identify a few endpoints on your existing InfiniBand networks to test your configuration.
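The whole per-partition procedure can be captured in a small helper.  A sketch: the function below only prints the dladm/ipadm commands (they exist only on Solaris), so you can review the output and pipe it to sh on the target host; link name, PKEY, and address are parameters.

```shell
# ipoib_setup: emit the commands to plumb one IPoIB partition.
ipoib_setup() {
  link=$1
  pkey=$2
  addr=$3
  echo "dladm create-part -l $link -P 0x$pkey p$pkey.$link"
  echo "ipadm create-ip p$pkey.$link"
  echo "ipadm create-addr -T static -a $addr p$pkey.$link/v4"
}

# Reproduces the 7fff sequence used above:
ipoib_setup ib0 7fff 10.2.10.11/24
```

On the Solaris host itself: ipoib_setup ib0 7fff 10.2.10.11/24 | sh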

Adventures in ZFS: Mirrored Rpool

It always makes sense to have a mirrored rpool on your production systems; however, that is not always how they are configured.  This procedure is really simple, yet critical.

Create a Mirrored Zpool

Check the existing devices to identify the one currently in use.

# zpool status rpool
  pool: rpool
 state: ONLINE
 scan: none requested
config:

 NAME STATE READ WRITE CKSUM
 rpool ONLINE 0 0 0
 c0t5000CCA0436359CCd0 ONLINE 0 0 0

errors: No known data errors

Once we know which one is currently in use, we need to find a different one to mirror onto.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c0t5000CCA0436359CCd0 <HITACHI-H109030SESUN300G-A606-279.40GB>
 /scsi_vhci/disk@g5000cca0436359cc
 /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD0/disk
  1. c0t5000CCA043650CD8d0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625> solaris
 /scsi_vhci/disk@g5000cca043650cd8
 /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD1/disk
Specify disk (enter its number):

Then we can build our mirrored rpool; this part is exactly the same as creating a mirror for any other zpool.

# zpool attach rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0
vdev verification failed: use -f to override the following errors:
/dev/dsk/c0t5000CCA043650CD8d0s0 contains a ufs filesystem.
/dev/dsk/c0t5000CCA043650CD8d0s6 contains a ufs filesystem.
Unable to build pool from specified devices: device already in use

In some cases, as here, the new disk will have an existing file system on it; in that case we will need to force the attach.  Please use caution with force, as it could cause you problems if you have multiple zpools on the system.

# zpool attach -f rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0
Make sure to wait until resilver is done before rebooting.

Now that will start the resilvering process, and we must wait for that to finish completely before rebooting.  So depending on the size of your disks it might be time for coffee or lunch.
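The wait itself can be scripted rather than eyeballed.  A hedged sketch: the text check is plain grep over zpool status output (so the helper can be exercised anywhere), while the commented loop needs a real pool mid-resilver.

```shell
# resilvering: succeed while the status text still reports a resilver.
resilvering() {
  grep -q 'resilver in progress'
}

# On the host itself, something like:
#   while zpool status rpool | resilvering; do sleep 60; done
#   echo "resilver complete - safe to reboot"

# Example against the status text shown below:
echo ' scan: resilver in progress since Fri Nov 28 10:11:03 2014' | resilvering && echo still-resilvering
```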

# zpool status rpool
 pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
 continue to function in a degraded state.
action: Wait for the resilver to complete.
 Run 'zpool status -v' to see device specific details.
 scan: resilver in progress since Fri Nov 28 10:11:03 2014
 224G scanned
 6.67G resilvered at 160M/s, 2.86% done, 0h23m to go
config:

 NAME STATE READ WRITE CKSUM
 rpool DEGRADED 0 0 0
 mirror-0 DEGRADED 0 0 0
 c0t5000CCA0436359CCd0 ONLINE 0 0 0
 c0t5000CCA043650CD8d0 DEGRADED 0 0 0 (resilvering)

errors: No known data errors

Let's check again and see if it is ready.

# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 224G in 0h27m with 0 errors on Fri Nov 28 10:38:25 2014
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA0436359CCd0 ONLINE 0 0 0
c0t5000CCA043650CD8d0 ONLINE 0 0 0

errors: No known data errors

Now, if you are just mirroring an ordinary zpool, that is the end of it.  However, if this is rpool, your mirror will not be worth anything if it doesn’t include the boot blocks.

Install Boot Blocks on SPARC

If your system is SPARC, you will use the installboot utility to install the boot blocks on the disk to ensure you will be able to boot from it in the event of primary disk failure.

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0
WARNING: target device /dev/rdsk/c0t5000CCA043650CD8d0s0 has a versioned bootblock but no versioning information was provided.
bootblock version installed on /dev/rdsk/c0t5000CCA043650CD8d0s0 is more recent or identical
Use -f to override or install without the -u option

Again, if this disk is not brand new, it might have existing boot blocks on it, which we will need to forcibly overwrite.

# installboot -f -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0

This wraps it up for a SPARC installation; it, of course, makes sense to test booting from the second disk as well.

Install Boot Blocks on x86

If you are using an x86 system, then you will need to use the installgrub utility.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000CCA043650CD8d0s0

There you have it.  We have successfully mirrored our x86 system as well.

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray and take a look at using CentOS as a hypervisor, configuring very resilient network connections to support our guests.  These instructions should be valid on Red Hat Enterprise Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions.

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which will increase the resiliency of the interface.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is what KVM creates the vNIC on to allow for guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0

Service Restart

Finally, the easy part.  One snag I ran into: if you previously created IP addresses on bond0, you will have a tough time getting rid of them with a service restart alone; I found it easier to reboot the box itself.

# service network restart
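After the restart (or reboot), it is worth confirming which slave the bond considers active; the kernel exposes this in /proc/net/bonding/bond0.  The helper below just parses that file's standard field format, so the sample text lets it run anywhere.

```shell
# active_slave: pull the active slave name from bonding proc output.
active_slave() {
  awk -F': ' '/Currently Active Slave/ {print $2}'
}

# On the hypervisor: active_slave < /proc/net/bonding/bond0
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'
echo "$sample" | active_slave    # prints eth0
```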

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with is integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out easily enough.  Calendars, however, turned out to be more of a challenge than I expected.

Here are the versions that I validated these steps on.

  • Blackberry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming that you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts (with mine obfuscated, of course).  We are going to be adding another one, so we select Add Account.

Figure 1-3 – Add Accounts

You can see in Figure 1-3 that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we are selecting CalDAV as the type of account to use.  Also, a little footnote: I was unable to get CardDAV working; I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we are populating all of the information needed to make a connection.  Keep in mind that we need to use user@domain.tld for the username, and the Server Address should be in the following format:  https://zimbra.domain.tld/dav/user@domain.tld/Calendar.  The important bits here are (1) https – I suspect http works as well, but I did not validate it; (2) the username – it is a component of the URI, which makes this a little tough to implement for less sophisticated users; and (3) Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether calendars with other names can be used, but this is the name needed for most situations.
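Because the username is baked into the URI, a tiny helper is handy for generating per-user setup instructions.  A sketch of the format described above: jdoe and example.com are placeholder values, and it assumes the Zimbra server is reachable at zimbra.<domain>, as in the example format.

```shell
# caldav_url: compose the Zimbra CalDAV URL for a given user and domain.
# "Calendar" is the default Zimbra calendar name noted above.
caldav_url() {
  user=$1
  domain=$2
  printf 'https://zimbra.%s/dav/%s@%s/Calendar\n' "$domain" "$user" "$domain"
}

caldav_url jdoe example.com
# prints https://zimbra.example.com/dav/jdoe@example.com/Calendar
```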

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology and healthcare centric marketing firm, we at illumeture work with emerging companies in achieving more right conversations with right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today. 

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary: you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end to end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year road-map.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second, is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do ensure you retain contractual flexibility that goes well beyond contract bench-marking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At Guide IT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions as well; in this case, however, I found it on Voyage Linux, a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, which is the package debconf cannot find.  With it installed, debconf will no longer need to fall back to Readline.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around; it is very stripped down, and as such there are a few annoyances we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to set the locales to use en_US.UTF-8 or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.

SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh

Oracle SQL Developer
Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.

LOAD TIME : 279
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000038a1e64910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log
[thread 140449881597696 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

I also noticed that it worked when executed as root; however, that clearly isn’t the “solution”.

Fixing the Problem

Here we need to unset the GNOME_DESKTOP_SESSION_ID environment variable as part of the launch script.

$ cat /opt/sqldeveloper/sqldeveloper.sh
#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Once this was completed, SQL Developer launched cleanly for me.

 

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions to implement a defense-in-depth security strategy and position the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of the operations touching everything from facilities operations to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment, paired with a lack of internal resources, led the organization to struggle to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments at the email threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, leading to increased risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops, and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust strategy for patch management would ensure the environment was properly safeguarded against existing vulnerabilities with the latest updates available. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions to secure the IT environment and change end-user behavior.

The Implementation

1. ASSESSMENT - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. PLANNING - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, end-point protection, infrastructure security, and email security.
3. DEPLOYMENT - With the plan in place, agents were rolled out within a week to immediately establish the centrally managed end-point protection solution. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been either blocked or resolved, ensuring the endpoints and users are secure. The email security solution initially scanned over 83,000 emails, effectively protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages being successfully delivered during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has been named one of the fastest-growing entrepreneurial companies in the SMU Cox Dallas 100™ awards for the third year.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of Dallas-area entrepreneurs. The award focuses not only on growth, but also on an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100,” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading edge solutions to market. We place a high value on the entrepreneurial spirit that has contributed to the success and growth we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence, delivering technology in an industry-specific context to create business value and an IT experience that delivers.

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. Learn more at www.guideit.com.

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their current server, which left them with less flexibility and control over their solution.

The Solution

GuideIT recommended moving the customer’s current server from their hosting provider, Armor, into AWS EC2 and AWS S3. Through this solution, the customer would realize a reduction in cost and greater durability and recoverability.

AWS Services

  • Managed Microsoft SQL Server (RDS)
  • AWS EC2 with Microsoft Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves Medical files from HMC clients, pushes a copy to AWS S3, processes files and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique ids using Amazon RDS “Microsoft SQL Server”
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring snapshots and rerunning files for day
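
The recovery step in the architecture above can be sketched in a few lines. This is an illustrative model only (hypothetical snapshot dates and file names, not the actual TIBCO/AWS implementation): restore from the latest daily snapshot, then rerun the files received after that snapshot was taken.

```python
from datetime import date

def pick_restore_point(snapshot_dates, failure_day):
    """Choose the latest daily snapshot taken on or before the failure day."""
    eligible = [s for s in snapshot_dates if s <= failure_day]
    return max(eligible)

def files_to_rerun(file_log, restore_point):
    """Files received after the restore point must be reprocessed."""
    return [name for name, received in file_log if received > restore_point]

# Hypothetical daily snapshots and an inbound-file log.
snapshots = [date(2021, 3, 1), date(2021, 3, 2)]
restore = pick_restore_point(snapshots, failure_day=date(2021, 3, 3))
rerun = files_to_rerun([("claims_0302.x12", date(2021, 3, 2)),
                        ("claims_0303.x12", date(2021, 3, 3))], restore)
```

In the real deployment the snapshots are the daily AWS EC2 and RDS snapshots stored via S3, and the "file log" is the copy of each inbound medical file already pushed to S3.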


Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve into the future at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all of the ways that our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case Studies to Learn About Our Services

We have implemented several case studies that are aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped countless companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how they relate to the healthcare industry. Listen in to learn about COVID-19’s impact on the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health has grown to nearly 1,000 primary care providers, with over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization sought a partnership with a Managed Services provider to help implement and support a scalable 24x7 model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To succeed in these initiatives, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Increase focus on end users and responsiveness, with Service Level metrics and continuous improvement to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all.”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieving the objectives of the business expansion would be to engage GuideIT’s Managed Services solutions, which would assume IT leadership and provide subject matter experts. GuideIT would deliver a solution encompassing infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would give Catalyst Health the environment to deploy a new Electronic Medical Record platform, enabling greater access to clinical data for caregivers and improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Business stakeholders prioritize IT initiatives for greater focus on success that would drive greater business results

Why GuideIT

With GuideIT’s healthcare expertise combined with its technology capabilities for managing a customer’s support requirements, a set of best practices and processes would be deployed to improve Catalyst Health’s technology environment. GuideIT would operationalize a set of technology metrics to allow greater transparency of performance, resiliency, and predictable results for the organization.

The best-practice approach would create the foundation of operational excellence for Catalyst Health’s IT environment, achieving greater business results with on-time, on-budget delivery. The underlying cost structure was converted from fixed to variable to support scalability, allowing Catalyst Health to realize a lower expense ratio as quality improved. Access to critical skill sets that would otherwise be difficult to hire and retain provided additional value to the organization.

The Implementation

GuideIT began with a consultative approach that included fully understanding the unique business model and support needs of Catalyst Health and its customers. Services were built around nine distinct areas: Infrastructure Management and Optimization, Service Desk, End User Field Support, Clinical Applications Support, Project Management, Vendor Management, Invoice Management, Security Enhancement, and Clinic Support.

1. Service Desk Management - Stakeholders identified the need to implement a more robust service desk that would aid in first call resolution for internal and external customers.
2. Infrastructure Management Transition - As the business grew, the need to support a larger, more diverse and scalable technology portfolio emerged. GuideIT assessed the environment and identified areas for immediate remediation; infrastructure standards, procedures, and performance management solutions were implemented to optimize the existing technology. As part of this transition, GuideIT transitioned existing customer IT staff and filled identified gaps in skill sets with additional resources.
3. Expansion of Infrastructure Support - With continued growth and dependency on technology, Catalyst Health expanded the relationship to include 24x7 Service Desk, Clinical Applications Service Desk, and project management. This expanded scope allowed for greater end-to-end problem resolution.
4. Enhancements to Support Today's Environment - The events of the 2020 pandemic brought new challenges and new solutions. In partnership with Catalyst Health, GuideIT responded with solutions for remote work, remote support, a COVID-19 hotline and, most recently, a Pharmacy Call Center.

The Results

  • Improved operational performance of IT systems with improved system availability
  • Seamless integration with the business departments to function as one team
  • Improved IT solutions and responsiveness to the business
  • Improved efficiency cost ratios for the organization during a high growth period
  • Ability to support increased IT demand with a variable cost structure

Regional Health System to Accelerate Information Flow and Automate Back Office Processes through GuideIT

April 25, 2019 – Plano, TX – GuideIT today announced it signed a new contract to provide business intelligence solutions for a regional health system.

With the objectives of accelerating information flow and optimizing back-office processes, the health system launched an initiative to replace manual reporting that requires information from multiple sources, including its EMR.  GuideIT will integrate critical data sources into a common platform, apply business logic and develop the visualizations necessary to meet the health system’s management objectives.

“In healthcare, there is an opportunity to strengthen patient care and operating performance through greater and more timely access to information,” said Chuck Lyles, CEO for GuideIT. “Healthcare providers have more information about their patients and businesses than ever before.  At GuideIT, our healthcare and data specialists help healthcare providers leverage this information to produce tangible business accomplishments.”

GuideIT Digital Business solutions, which incorporate Digital Transformation, Business Intelligence and Digital Workplace, help organizations operate more efficiently, turn ideas for creating new business value into reality, and facilitate a dynamic, anytime-anyplace business environment.

About GuideIT

GuideIT provides IT services that make technology contribute to business success. Through its consulting, managed IT, digital business, and cyber security solutions, and the way it partners with customers, simplifies the complex, and inspires confidence, GuideIT utilizes technology in an industry context to enable the creation of business value and an IT experience that delivers. Founded in 2013 and part of a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth. More information is available at www.guideit.com.

Risk and Security Management Solutions Provider Modernizes Go-To-Market Application

A leading provider of risk and security management solutions needed to rewrite and modernize its core go-to-market application. GuideIT collaborated with the organization on defining its business requirements, developed the new application using a hybrid agile/waterfall development method, and continues to enhance the product through agile sprint and release cycles. The application, with its modern interface and improved features and functionality, helped the customer expand their subscriber base by more than 95% in a 20-month period.

How to Protect Your Business From the Growing Complexity of Email-Based Security Attacks

The Threat Landscape

Organizations face email-based security threats of growing frequency and complexity, as most targeted attacks begin with an email. Advanced malware delivery, phishing, and domain and identity spoofing can penetrate the primary layer of security provided as part of the email service and damage your business. With the increasing complexity of attacks, relying solely upon base security features and employee training is no longer adequate. Additionally, the types of organizations receiving these email attacks are expanding to include not only large and well-known businesses, but also small businesses, because of a perception that there will be fewer security layers.

Our Approach

With GuideIT Advanced Email Protection you receive the extra security necessary to address this growing threat. We provide a service configurable to the level of protection you seek, priced on a variable, per-mailbox basis. Based on the requirements established, which encompass the level of protection, filter rules and user parameters, we implement and operate the advanced protection, while also providing you visibility into the threat environment and actions to protect your business.

How It Works

We implement a protective shield, monitored by security experts, through which all email traffic is routed. Inbound messages are checked against known fraudulent and dangerous URLs and email addresses, while attachments are scanned for malware. When an incoming email is flagged, it is blocked and quarantined, and the GuideIT security team is notified. We then work with your team to revise the protective rules as necessary for your business. All outbound messages are scanned to ensure that Personally Identifiable Information (PII) and Protected Health Information (PHI) do not leave the organization accidentally or maliciously.
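
Conceptually, the flow works like a two-way filter. This minimal sketch uses illustrative blocklists and a crude PII pattern only; a production gateway draws on live threat intelligence and far richer detection:

```python
import re

BLOCKED_URLS = {"http://phish.example.com/login"}   # hypothetical threat feed
BLOCKED_SENDERS = {"attacker@bad.example"}          # hypothetical known-bad senders
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude PII check (US SSN format)

def screen_inbound(sender, urls):
    """Quarantine mail from known-bad senders or containing known-bad URLs."""
    if sender in BLOCKED_SENDERS or any(u in BLOCKED_URLS for u in urls):
        return "quarantine"  # in practice, the security team is also notified
    return "deliver"

def screen_outbound(body):
    """Hold outbound mail that appears to leak PII before it leaves the organization."""
    return "hold" if SSN_PATTERN.search(body) else "send"
```

The value of the managed service is in maintaining and tuning those rule sets over time, which is why flagged messages feed back into rule revisions with your team.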

Read next: How to Protect Your End User Devices from COVID-19 Phishing Attacks

How You Will Benefit

Through our Advanced Email Protection solution, you will realize:

  • Greater protection from advanced email threats
  • Increased visibility into the threats being experienced
  • Enhanced email encryption and data loss prevention
  • Extended protection to social media accounts
  • Better compliance and discovery readiness

Contact us to get started today.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset…DATA MINING (1 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last weekend I read a very interesting book entitled “The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It” by Scott Patterson. I highly recommend it as a must-read for all of you who are doing Business Intelligence, and especially Data Mining.

So what is Data Mining? Basically it is the practice of examining large databases in order to generate new information. Ok, let’s dig into that to understand some business value.

Let us consider the US Census. By law, it is conducted every ten years, and it produces petabytes of data (1 petabyte is one quadrillion bytes) crammed full of facts that are important to almost anyone doing data mining for almost any consumer-based product or service. Quick sidebar and promo…in part 2 of this micro series, I will share where databases like the census and others can be accessed to help make your data mining exercise valuable.

So if I were asked by the marketing department to help predict how much to spend on a new advertising campaign to sell a new health care product that enhances existing dental benefits of those already in qualified dental plans, I would have a need for data mining. With these criteria, I would, for example, query the average commute time of people over 16 in the state of Texas. It is 25 minutes. We would now have a cornerstone insight to work from. This of course narrows the age group to those receiving incomes and not on Social Security or Medicare. To validate a possible conclusion, we run a secondary query on additional demographic criteria and learn that the 25-minute commute volume count doesn’t change. Yet we learn that 35% of the people belong to one particular minority segment.
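
Queries like these boil down to filtering and aggregating demographic records. This toy sketch uses fabricated sample rows (not actual census data) purely to show the shape of the exercise:

```python
def avg_commute(records, state, min_age):
    """Average commute time for people over min_age in a given state."""
    rows = [r for r in records if r["state"] == state and r["age"] > min_age]
    return sum(r["commute_min"] for r in rows) / len(rows)

def segment_share(records, state, min_age, segment):
    """Share of the same filtered population belonging to one demographic segment."""
    rows = [r for r in records if r["state"] == state and r["age"] > min_age]
    return sum(1 for r in rows if r["segment"] == segment) / len(rows)

# Fabricated sample records standing in for census microdata.
sample = [
    {"state": "TX", "age": 34, "commute_min": 20, "segment": "A"},
    {"state": "TX", "age": 41, "commute_min": 30, "segment": "B"},
    {"state": "OK", "age": 29, "commute_min": 15, "segment": "A"},
]
texas_avg = avg_commute(sample, "TX", 16)  # returns 25.0
```

The secondary "validation" query in the text is the same pattern with one more filter: recompute the aggregate under additional criteria and see whether the answer moves.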

I pass this information to the Marketing Department and they now have the basis to understand how much they should pay for a statewide marketing campaign to promote their new product, when to run the campaign, and what channels and platforms to use.

DATA MINING, can’t live without it. Next week we’ll cover how and where to mine.

Servant Leadership…How to Build Trust in The Midst of Turmoil

AUTHORED BY RON HILL, VICE PRESIDENT, SALES @ GUIDEIT

It was a sunny winter day and I had just started as the Client Executive at one of the largest accounts in the company. Little did I know, clouds were about to roll in. The CIO walked into my office and sat down with a big sigh. She communicated that they were ending our agreement and moving to a different service provider. We had 12 months. It required immediate action by our company, implications in the market would ensue, and an environment of uncertainty was born for our team of more than 700 people providing service support.

This was no time to defend or accept defeat. We had to act. Our account leadership team readied the organization for the work ahead and imminent loss. We formally announced the situation to the organization. There were tears and some were even distraught. Our leadership team had not faced this situation before. The next 12 months looked daunting.

Regardless, it was time to lead. We created a “save” strategy and stepped into action, beginning with daily team meetings. We invested time prioritizing and sharing action items and implications about information systems, project management, and the business process services. It was our job to operate with excellence, despite the past. It was our job to honorably communicate knowledge to the incoming service provider. One outcome of our work was a weekly email outlining the past week’s accomplishments and expectations for the week ahead. The email often included a blend of personal stories and team successes. We even came up with a catchy brand for the email…Truth of the Matter. It turned out to be a key vehicle that kept our teams bonded and informed, and one our leadership team used to help maintain trust with the team.

During our work, we also began to rebuild trust with the customer as we continued to support them in all phases of their operation. Because of our leadership team’s commitment to service, transparency, and integrity, the delivery team was inspired to achieve many great milestones during those 12 months. We were instrumental in helping our customer achieve multiple business awards, including a US News and World Report top ranking. We also found ways to achieve goals that established new trends in their industry. Before we knew it, the year had come and gone and we were still there.

Reflecting back, that dark day when the CIO informed me that we were done was actually the beginning of a relationship that lasted more than a decade. The team had accomplished an improbable feat. In the end, it was the focus of our leadership to come together with a single message and act with transparency…letting their guard down to build an environment of trust with the team and with the customer. This enabled all of us to focus on meeting the goals of the customer, together.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 2 of 2)

AUTHORED BY DONALD C. GILLETTE, PH.D., DATA CONSULTANT @ GUIDEIT

Last week we declared, “If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years. That’s right…five years”.

This week, I’m introducing a perspective on leveraging Big Data to create tangible asset value. In the world of Big Data, structure is undefined and management tools vary greatly across both open source and proprietary offerings…each requiring a set of skills unique from the world of relational or hierarchical data. To appreciate the sheer mass of the word “big”, we are talking about daily feeds of 45 terabytes from some social media sites. Some of the users of this data have nicknames like “Quants”, and they use tools called Hadoop, MapReduce, GridGain, HPCC and Storm. It’s a crazy scene out there!
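
To make one of those tools concrete: the core idea behind MapReduce, mapping records to key-value pairs and then reducing the values for each key, fits in a few lines of plain Python. This is a conceptual in-memory sketch, not the Hadoop API, and the purchase feed is invented for illustration:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce: map each record to (key, value) pairs,
    shuffle the pairs into per-key groups, then reduce each group."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

# Toy job: count purchase events per zip code from a (zip, amount) feed.
events = [("75024", 9.99), ("75024", 4.50), ("75201", 12.00)]
counts = map_reduce(events, mapper=lambda e: [(e[0], 1)], reducer=sum)
```

The real frameworks earn their keep by running the same map/shuffle/reduce pattern across clusters of machines against those 45-terabyte daily feeds.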

Ok, so the world of big data is a crazy scene. How do we dig in and extract value from it?  In working with a customer recently, we set an objective to leverage Big Data to help launch a new consumer product. In the old days, we would assemble a survey team, form a focus group and make decisions based on a very small sample of opinions…hoping to launch the product with success. Today we access, analyze, and filter multiple data sources on people, geography, and buying patterns to understand the highest probability store locations for a successful launch. All these data sources exist in various electronic formats today and are available through delivery sources like Amazon Web Services (AWS) and others.

In our case, after processing one petabyte (1000 terabytes) of data we enabled the following business decisions…

  • Focused our target launch areas to five zip codes where families have an average age of children from two to four years old with a good saturation of grocery stores and an above average median income
  • Initiated a marketing campaign including social media centered on moms, TV media centered on cartoon shows
  • Offered product placement incentives for stores focusing on the right shelf placement for moms and children

While moms are the buyers, children are influencers when in the store. In this case, for this product, lower shelves showed a higher purchasing probability because of visibility for children to make the connection to the advertising and “help” mom make the decision to buy.

Conclusion? The dataset is now archived as a case study and the team is repeating this exercise in other regional geographic areas. Sales can now be compared between areas enabling more prudent and valuable business decisions. Leveraging Big Data delivered asset value by increasing profitability, not based on the product but rather on the use of data about the product. What stories can you share about leveraging Big Data? Post them or ask questions in the comments section.

Your Data: No Matter What You Do, It’s Your Most Valuable Asset (Part 1)

Authored by Donald C. Gillette, Ph.D., Data Consultant @ GuideIT

If you don’t embrace the fact that your business’ greatest asset is your data, not what you manufacture, sell or any other revenue-generating exercise, you will not exist in five years.  That’s right…five years.

Not so sure that’s true? Ask entertainment giant Caesars Entertainment Corp. They recently filed Chapter 11 and have learned that their data is what creditors value. (Wall Street Journal, March 19, 2015, “Prize in Caesars Fight: Data on Players”; the customer loyalty program is valued at $1 billion by creditors.) The data intelligence on their customers is worth more than any of their other assets, including real estate.

Before working to prove this seemingly bold statement, let’s take a look back to capture some much needed perspective about data.

The Mainframe

Space and resources were expensive, and systems were designed and implemented by professionals who had good knowledge of the enterprise and its needs. Additionally, very structured processes existed to develop systems and information. All this investment and structure was often considered a bottleneck and an impediment to progress. Critical information, such as a customer file or purchasing history, was stored in a single, protected location. Mainframe Business Intelligence offerings were report-writing tools like Mark IV. Programmers and some business users were able to pull basic reports. However, very little data delivered intelligence like customer buying habits.

Enter the Spreadsheet

With the introduction of the PC, Lotus 1-2-3 soon arrived in the market. We finally had a tool that could represent data in a two-dimensional (2D) format, enabling the connection of valuable data to static business information. Some actionable information was developed, resulting in better business decisions. This opened up a whole new world of what we now call business intelligence. Yet connecting the right data points was a cumbersome, manual process. Windows entered the scene, and with it the market shifted from Lotus to Excel, carrying over similar functionality and challenges.

Client Server World Emerges

As client-server architectures emerged in the marketplace, data became much more accessible. It was also easier to connect together, relative to the past, giving stakeholders real business intelligence and demonstrating its value to the enterprise. With tools like Cognos, Teradata, and Netezza in play, data moved from 2D to 3D presentation. Microsoft also entered the marketplace with SQL Server. All this change actually flipped the challenges of the Mainframe era. Instead of bottlenecked data that is hard to retrieve, version creep entered the fold…multiple versions of similar information in multiple locations. What’s the source of truth?

Tune in next week as we provide support for data being your most valuable asset with a perspective and case study analysis of a Business Intelligence model that uses all technology platforms and delivers the results to your smartphone.

Reduce IT Spending… Approach Rationalization The Right Way

AUTHORED BY FRANK T. AVIGNONE IV, TRANSFORMATION EXECUTIVE @ GUIDEIT

Meaningful Use, Health Information Exchange, and Predictive Analytics are a few phrases that keep hospital CFOs awake at night. As the hospital market prepares for another shift in reimbursement, including a 1.3% cut in Medicare reimbursement for 2015 and an additional 75% cut in DSH payments by 2019, the health system CFO faces innumerable financial challenges in maintaining a healthy balance sheet. Add to these concerns the looming ICD-10 transition expense and consolidation (including the aggregation of physicians and post-acute care providers), and the future is daunting for the chief financial officer and other executive stakeholders.

There is a bright spot for the health system CFO with respect to bringing sanity to the healthcare IT spend on the balance sheet. It just requires a little courage. The majority of US health systems maintain an IT portfolio that supports redundant functions across the enterprise. In a consolidation environment where M&A activity is increasing, this can mean $70,000-100,000 per bed to integrate disparate clinical and business systems. A simple effort of technology portfolio rationalization can reduce IT spend in any environment by as much as 60% in capex and 30% in opex. The effectiveness of application portfolio rationalization, and its impact on the health system in terms of cost savings, revenue generation, and meeting the needs of clinical and business users, depends on the right approach.

While traditional application rationalization projects will yield positive, quantifiable results, they typically do not take into account the “information rationalization” that will negatively impact value and time to care delivery. The most important aspect of an application portfolio is not the application itself, but rather the information trapped within the application stack. Changing perspective will increase the value of any rationalization effort. Releasing the information contained within legacy applications is the critical focus. Organizations can accomplish this by leveraging an enterprise service bus to overlay the information-rich interface engine architecture, using existing information without the tired “rip and replace” approach usually offered by software and IT vendors. Once information is captured within the enterprise bus, it can be analyzed, consolidated into events, and used as real-time streaming information to better understand the real value of the data and its origins. While capex/opex cost reductions are the underlying principles of the rationalization effort, the health system CFO and CIO can work together to create additional value. Simply by releasing the information, and in some cases virtualizing the associated applications’ logic, the healthcare enterprise can preserve the value of, and improve access to, the information trapped within. This approach allows for the rationalization discovered by traditional disciplines while providing a single, uniform source of information and infrastructure to rapidly enable new business solutions.

The time has come for the health system CFO and CIO to work hand in hand to accurately understand and align business needs with an agile information technology stack, one that securely and dependably promotes boundary-less access to information independent of application silos.

Service Desk Selection: 3 Checkpoints

AUTHORED BY SCOTT TEEL, MANAGED SERVICES EXECUTIVE @ GUIDEIT

Today’s Service Desk continues to evolve with the technology it supports for the individual end user community. Granted, it begins with a single seat and a phone. But from phone calls to email, to self-service customer web portals, chat, and social media…the ways in which we engage help have changed and scaled dramatically.

All sources of customer engagement must be tracked and reported in a single ticketing system to ensure quality of service through measurable analysis of performance. And a strong value proposition is a must. As you or someone in your organization considers that value proposition, here are 3 checkpoints for selecting a Service Desk solution:

  1. Partnership. Service Desk capabilities are often labeled a commodity offering due to offshore capabilities. Providers of these services are all battling to drive the lowest-cost solution without ‘listening and understanding’ individual customer requirements. If treated like a commodity, in most cases, the service becomes a bad investment. The right partner will offer the right solution by listening to and understanding the demands and risks of your needs. Then they can apply the right automation, tools, and utilities to make the service flourish and mitigate risk.
  2. Pricing. Yes, many variables drive the cost of a Service Desk offering up or down…onshore vs. offshore, languages, first call resolution, ticketing tool types, reporting, IT and application support, and so on. Regardless, service providers want to fill their excess capacity. Invest the time to understand their situation. By asking the right questions about their capabilities and willingness to be flexible (and their ability to execute within such flexibility using a defined methodology), you can find great value through negotiating the right balance of service and pricing.
  3. Metrics. You must ensure that your partner has the available tools to establish a baseline for delivery for this service, while following ITIL processes that enable Continual Service Improvements (CSI) throughout the relationship. The right tools include the availability and performance of the PBX / ACD system, the ticketing system and any additional automated processes to show the CSI. The right reporting is available weekly, monthly and must be meaningfully measurable.
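As a concrete illustration of the metrics checkpoint, a baseline measure such as first call resolution can be computed from any ticketing system’s export. A minimal sketch, assuming a hypothetical CSV export with an invented resolved_on_first_contact column (real column names depend on your ticketing tool):

```shell
# Hypothetical ticket export; column names are examples, not a real vendor format.
cat > tickets.csv <<'EOF'
ticket_id,channel,resolved_on_first_contact
1001,phone,yes
1002,email,no
1003,chat,yes
1004,portal,yes
EOF

# First-call-resolution rate: tickets resolved on first contact / all tickets.
awk -F, 'NR > 1 { total++; if ($3 == "yes") fcr++ }
         END { printf "FCR: %.0f%%\n", 100 * fcr / total }' tickets.csv
```

Run against a weekly or monthly export, the same one-liner gives a trendable number for Continual Service Improvement reporting.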

In summary, evaluate your options and ask a lot of questions about their situation. You will develop the leverage you need to achieve the right service with maximum value. Approach your evaluation this way and you will increase the probability of partnering with a group that serves as an extension of your team.

Balancing Creativity and Efficiency in IT Service Management (ITSM) Environments: 3 Best Practices

AUTHORED BY Scott teel, Managed services EXECUTIVE @ GUIDEIT

Although many IT service managers enjoy the thrill of a good chase (identifying the problem, developing possible solutions, and then testing those theories), leveraging the creativity of those outside-the-box thinkers can be a challenge.  Most engineers and administrators base their problem solving on their own set of experiences and training.  While this is part of the reason you hire them, it can sometimes limit their problem-solving efficiency and overall performance when measured against the objectives of the business.  While some IT problems may be easily identified and solved, others require a much more “detective-like” approach and more creativity. So how does a leader balance creativity and efficiency in an ITSM environment?

Here are 3 best practices to ensure problem solving remains streamlined while still fostering creativity…

  1. Collaborate.  Infrastructure problems are complex and can span a multitude of functional areas. One-size-fits-all solutions are no longer the norm in IT, and most solutions today can coexist with or integrate into the foundation of your ITSM solution. So foster a proactive, organized collaboration environment that enables open sharing across domain expertise.
  2. Speak the same language and keep it simple.  Problems should be solved with a balance of tactical and strategic insight. Ensure the final solution is solved by taking small, easy to understand steps and milestones that achieve the overall business goals with measured results. Make sure your IT specialists are on the same page by providing a clear understanding of the problem, possible causes, and possible outcomes.
  3. Bring in help if needed.  Sometimes the right answer will come from outside your group. Don’t be afraid to consider this option.

Creativity can be balanced with efficiency by fostering an environment where ideas and solutions can be freely shared with an organized and collaborative approach. Join us next week for our next microblog post!

Fedora 20: Firefox Reports Flash as Vulnerable

This problem starts with Firefox reporting that your flash-plugin is out of date.  The report looks like this and disables all Flash content.

After this, we will take a look at Mozilla’s Plugin Check to see what it thinks is going on.

Now here we can see that version 11.2.202.440 is vulnerable.  We will then check about:plugins to see if it agrees.

Again this is also reporting 11.2.202.440, so there must be a problem, but it also tells us that there is an update available.  Now, I run regular yum updates on this machine, and I actually noticed flash-plugin was updated just a few hours prior to seeing this alert.  So let’s check the installed version.

[root@ltmmattoon matthew]# yum info flash-plugin
Loaded plugins: langpacks, refresh-packagekit
Installed Packages
Name        : flash-plugin
Arch        : x86_64
Version     : 11.2.202.442
Release     : release
Size        : 19 M
Repo        : installed
From repo   : adobe-flashplayer
Summary     : Adobe Flash Player 11.2
URL         : http://www.adobe.com/downloads/
License     : Commercial
Description : Adobe Flash Plugin 11.2.202.442
            : Fully Supported: Mozilla SeaMonkey 1.0+, Firefox 1.5+, Mozilla
            : 1.7.13+

Interesting: 11.2.202.442, which is higher than what Firefox is reporting.  Of course Firefox has been restarted, but let’s do it again just to make sure.

Now to fix it.

$ pwd
/home/matthew/.mozilla/firefox/cls7wbvm.default
$ mv pluginreg.dat pluginreg.dat.bak

Restart Firefox and it will collect new data on all of its plugins, and about:plugins will start reporting the correct version.
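If you maintain several Firefox profiles, the same rename can be scripted rather than done by hand in each one. A minimal sketch, demonstrated in a scratch directory standing in for ~/.mozilla/firefox (the profile directory names here are invented; real names like the cls7wbvm.default above vary per machine):

```shell
# Build a scratch stand-in for ~/.mozilla/firefox with two fake profiles.
base=$(mktemp -d)
mkdir -p "$base/abc123de.default" "$base/xyz99fgh.work"
touch "$base/abc123de.default/pluginreg.dat" "$base/xyz99fgh.work/pluginreg.dat"

# Rename every profile's stale plugin registry; Firefox rebuilds
# pluginreg.dat on its next start.
for f in "$base"/*/pluginreg.dat; do
  mv "$f" "$f.bak"
done
```

Against a real installation, quit Firefox first and point the loop at "$HOME/.mozilla/firefox" instead of the scratch directory.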

IT Project Management…Which Stakeholder Are You?

Authored by Guy Wolf, transformation executive @ guideit

So much material has already been developed and published about what a PMO is, what it can be, and how to set one up.  Much of the material is banal. For those of you who are fans of Monty Python, the “How to do it” skit comes to mind. This particular post focuses on something else: a perspective on stakeholder roles and the importance of clear objectives.

Often PMOs get started for the wrong reasons, putting a solution in place before fully understanding the primary objective. Some promote focusing on achieving a level of maturity first. Others propose starting at the project level, and as you demonstrate proficiency, moving “up” to the program, then portfolio level.  The problem with these approaches is that the “what” is confused for the “how.”

The best practice for an effective PMO is to develop a list of business objectives and customers that will be served with a business case that illustrates why implementing a PMO is better than the alternatives. The PMO, however one defines it, is not a project.  It is a business unit.  Therefore, just like Human Resources, Marketing, or Facilities, it must justify its existence by improving the lives of its customers.  What that means in your situation, and how to go about it, will be different from others. Below are some perspectives by role.

Customer/CIO:  Nearly all business improvement initiatives have a large component of Information Technology (IT) at their core. Frequently, IT is the single largest component, and implementation is often on the critical path to achieving the desired end state. Additionally, IT departments often suffer from a practice of project management that excludes all other departments in an enterprise.  This disconnect can create a misalignment in critical path objectives. Unfortunately, the CIO too often is left holding the bag at the end if the broader strategy and governance are not easily accessible. What the CIO needs is clear governance, or a seat at the strategy table, to manage a complex, inter-related portfolio of initiatives that will deliver success to the company.

CFO: CFOs are expected to forecast and manage capital and operating expenses.  As enterprise business-change initiatives often carry high risk, a CFO has a strong desire to ensure that processes are in place to alert leadership in advance of potential variances and to manage expenses to the forecasted budget, even if it was set long before the project requirements were fully known.

CEO: charged with the overall success of the organization, the CEO must manage many competing priorities among multiple departments. Managing a global perspective includes oversight of limited capital investment resources spread across multiple strategic priorities.  To that end, CEOs require some method to weigh the various investment options and to select the combination that has the highest chance of achieving the overall organizational objectives.

Business Unit Leaders (Sponsors):  charged with growing and improving their areas of responsibility. They have a need for a well-defined process to engage IT resources in helping them prioritize projects and source them with the right resources. Furthermore, they need visibility to relevant status reporting with opportunity to make business decisions to navigate a successful result.

Steering Committee: responsible for weighing the costs, risks and benefits of multiple project options, often without certainty of the inputs.  They require a method that provides as much information as possible regarding objectives, resources, and stakeholders.  For projects underway, visibility to insights through reporting enables better decision-making throughout the process.

Project Managers: need support for collecting status data enabling focus on day-day decision making and management, not task-driven administration; access to resources across multiple matrixed towers in the organization; access to key stakeholders to make decisions and allow them to keep projects on track.

Team members: require easy data collection that helps reporting status and doesn't take a lot of time to use; respect for a balance of time to support operations as well as project demands from multiple project manager stakeholders.
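The weighing method that the CEO and steering committee perspectives above call for can be sketched very simply: score each option against weighted criteria and rank the results. A minimal illustration with invented project names, scores, and weights (each organization would set its own criteria and weights):

```shell
# Hypothetical portfolio: project, strategic fit (1-10), risk (1-10, lower is better).
cat > portfolio.csv <<'EOF'
project,fit,risk
erp_upgrade,8,6
patient_portal,9,3
data_center_move,6,8
EOF

# Weighted score = 0.7 * fit + 0.3 * (10 - risk); the 70/30 split is an
# assumption for illustration, not a recommendation. Rank highest first.
awk -F, 'NR > 1 { printf "%s %.1f\n", $1, 0.7*$2 + 0.3*(10-$3) }' portfolio.csv |
  sort -k2 -rn
```

Even a toy model like this forces the conversation the PMO needs: which criteria matter, how much each weighs, and why one project outranks another.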

Choosing objectives means limiting some, and eliminating others. Prioritization isn’t easy but it’s necessary to increase the probability of extending the long-term value of your projects. There are some great templates that can be used in building and operating a PMO to improve the quality and speed with which we achieve our goals. If you would like more information, drop a comment or email me at guy.wolf@guideit.com. I welcome your feedback, as we strive to do technology right, and do projects right.

BlackBerry Z30: No Update to 10.3.1

I have a BlackBerry Z30 (STA100-5) which I was excited to update to the latest release of BB10, which was announced on February 9, 2015 (link).  However, when I was attempting to install the update over the air, it kept telling me that I was already on the latest version.  This was obviously incorrect (I was on 10.2.1.3062 which is the latest prior to 10.3.1).

Here are the things that I tried that were unsuccessful.

  • Reboots (including power off).
  • Removing the SIM and using wifi only.
  • Waiting.

Now eventually I was able to get the update installed on the advice of a friend who already had the update.

  1. Turn off Mobile Network.
  2. Power Off.
  3. Remove SIM.
  4. Power On.
  5. Check for update.

Now at this point, something much different happened: it took significantly longer to check for the updates, which of course got me excited thinking it must actually be doing something.  Twenty minutes later I realized I must have been wrong, and killed the Settings app. I then checked for the update again; it immediately found it and I was able to start the install.  Once the update was downloading, I re-inserted my SIM and enabled mobile networking.

Obviously there is the possibility of streamlining this procedure (whether you actually need to disable mobile networking and remove the SIM is the most obvious question), but since I didn’t have a box full of these devices with this problem, I was unable to optimize the procedure. So feel free to tinker, but if nothing seems to work, give the above a go and see if you have the same experience.

Also important to note: I purchased my BlackBerry directly from the BlackBerry Store. If you purchased yours from a carrier, your mileage may vary based on carrier approvals.

Physicians, Clinicians: Thank You

Authored by Mark Johnson, VP Managed Services @ GuideIT

For anyone who has spent the bulk of their career in healthcare IT, a venture into an in/out-patient setting for one’s own health is always an interesting experience.  Throughout the process you can’t help but say – “it’s 2015 and we’re still doing this?”  For me it was in preparation for that first (dreaded) “over 50 procedure”.  It started with far too much paperwork, some of it redundant, and some of it collecting information I had already provided in their portal (sadly with no linkage to my HealthVault account).  Then I arrived in the clinic and was not only faced with more paperwork, but music that was playing way too loud on a morning that I was already grumpy from not being able to eat the day prior.

But then, everything changed.  Once I left the waiting room, every clinician I interacted with was simply outstanding.  From the prep nurse, to the anesthesiologist, to the doctor himself.  They actually seemed to really and truly enjoy their work!  And their positive approach to delivery of care translated directly to an extremely positive patient-clinician interaction.

So while there’s plenty of time to talk about how to better leverage IT in the delivery of care, for me today this is simply a “hats off and well done” to the people who really make such a tremendous difference in our lives – clinicians and their staff.
Oh, and if you’re wondering – it turns out it was a very good thing I had this taken care of.  So listen to your physician.

MultiSourcing…The Right IT Governance for Maximizing Business Outcomes

Authored by Jeff Smith, VP Business Development @ GuideIT

A national healthcare provider was ready to move from multiple PBX systems to a VOIP-centric model for their communications…the transition, one piece of a broader multi-source IT strategy. Simple enough, right? Not exactly. This transition was a monster…500 locations and more than 1100 buildings. Additionally, the provider cares for patients, the majority of whom are in some form of acute need. Sure, any business requires clean execution in a project of this magnitude. But few businesses have the sole mission of caring for the acute health needs of their customers like healthcare providers do for their patients.

Truly lots of moving parts in this story…a story representing one part of the bigger picture. A critical attribute of this provider’s success was ensuring the right IT Governance function encompassing their multi-source strategy.

So what is the right governance? According to Gartner, governance is the decision framework and process by which enterprises make investment decisions and drive business value. Take that one step further applying IT and the definition is, “IT Governance (ITG) is the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals. IT demand governance (ITDG—what IT should work on) is the process by which organizations ensure the effective evaluation, selection, prioritization, and funding of competing IT investments; oversee their implementation; and extract business benefits.”

Now consider “why” the right IT Governance is critical in a multi-sourcing environment. When multiple vendor partners serve in support of the broader business mission, the opportunity to optimize outcomes for the business is huge. And so is the risk. The opportunity is there because the organization can leverage the specialization of subject matter experts necessary in a highly complex IT environment driven by growing business demands. One partner specializes in apps, another in cloud infrastructure, another in mobility, and so on. They all bring optimal value in areas critical to support the business…thus the core value of multi-sourcing.

Therein lies the risk too. Without the right governance model, no clear accountability exists to ensure open collaboration and visibility across specialists. Specialists will act in silos. And we all know how silos hurt business. Simply put, the “why” for the right governance is to optimize outcomes through maximizing specialization while minimizing the risk of “silo-creep”. The right governance closes the gap between what IT departments think the business requires and what the business thinks the IT department is able to deliver. Organizations need to have a better understanding of the value delivered by IT and the multiple vendor partners leveraged…some of whom are ushered in through business stakeholders.

Because organizations are relying more and more on new technology, executive leadership must be more aware of critical IT risks and how they are being managed. Take, for example, our communications transition story from earlier…if there is a lack of clarity and transparency when making such a significant IT decision, the transition project may stall or fail, putting the business at risk and, in this case, patients’ lives at risk. That has a crippling impact on the broader business and on future consideration of the right new technologies to leverage.

Conclusion: the right IT Governance is critical to optimizing business outcomes.

Perot Back in IT Services

MAKES MAJOR INVESTMENT IN GUIDEIT

Plano, TX – Monday, February 2, 2015 – GuideIT, a Plano-based provider of technology optimization services, today announced that the Perot family has increased their investment in the company to become its largest shareholder. GuideIT, newly branded as A Perot Company, welcomes Ross Perot, Jr. as a member of the board.

Corporate portrait session with Ross Perot, Ross Perot Jr., and the founders/executives from GuideIT, taken in the front foyer of Ross Perot Sr.’s office in Plano, Texas

Back Row: Chuck Lyles, CEO  |  John Furniss, Vice President  |  Scott Barnes, Board Member  |  Tim Morris, Vice President  |  John Lyon, CFO

Front Row: Ross Perot, Jr., Board Member  |  H Ross Perot  |  Russell Freeman, Board Member

“Through EDS and Perot Systems, my family has played a major role in shaping the IT services industry,” said Perot, Jr. “GuideIT has fostered a great entrepreneurial spirit and a strong commitment in delivering customer results in a rapidly growing organization. I look forward to building a great company.”

GuideIT has a suite of solutions and an engagement approach tailored for today’s business environment and technology issues.  The company’s revenue more than tripled in 2014.

“We are building a next-generation services company based on timeless services industry principles,” said Chuck Lyles, CEO.  “We are honored to be associated with the Perot family who are known for their commitment to excellent customer service, outstanding business management and the highest ethical standards.”

GuideIT offers services that help customers optimize their technology environments. Primary offerings include consultative services such as technology vendor management, project management, enterprise assessments, and a suite of deployment and managed services. By deploying these solutions in a collaborative, flexible engagement approach, customers achieve tangible business results.

About GuideIT

As a provider of technology optimization services, we believe doing technology right is the difference between leaders and the rest. We help companies lead.
Through a collaborative and easy-to-do-business-with approach, the company helps customers align IT operations in meeting their strategic business needs, better govern and manage the cost of IT, and effectively navigate change in technology.

Media Contact

James Fuller
Public Strategies, Inc.
214-613-0028
jfuller@pstrategies.com

MultiSourcing…A Critical Strategy for Aligning IT with the Business Mission

Authored by Chuck Lyles, CEO @ GuideIT

A growing trend in IT services is the implementation of strategies designed to migrate IT operations from a single provider to an environment leveraging multiple specialty companies. As the market matures, this trend can better enable CIOs to execute strategically, driving greater effectiveness and efficiency in operations.

So what are the high level benefits and outcomes of multi-sourcing?

The right multi-sourcing strategy allows IT teams to dilute risk with partners who specialize in a particular discipline or technology.  Additionally, this type of strategy facilitates greater flexibility enabling the internal agility necessary for adapting to changing priorities…a consistent theme in supporting the broader business mission. Specialized firms are more responsive to customer needs, more motivated to consistently drive innovation, and better at implementing disruptive technologies that drive effectiveness through more automation.

What are some of the challenges and potential pitfalls?

Accountability. Yes, multi-sourcing is a critical approach for leveraging IT in supporting the needs of the business. Yet to be truly strategic in this approach, leaders must require accountability. Fail to create an environment of accountability in execution, and the strategy isn’t worth the paper it’s written on. Another challenge…simplicity. A “multi” approach, absent a sound strategy, has the potential to introduce complexity and silos into your environment. So what’s the answer for ensuring accountability and simplicity in your multi-sourcing approach? Clear purpose, aligned incentives, and shared values. Easy to say; tough to do. More on this in future posts.

What’s your perspective on multi-source strategies?

SPARC Logical Domains: Alternate Service Domains Part 3

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two of this series we took the information from Part One and split out our PCI Root Complexes and we configured and installed an alternate domain.  We were also able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three (this article) we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.  At the end of this article, we will be able to reboot either the primary or alternate domain without it having an impact on any of the running guests.

Create Redundant Virtual Services

So at this point, we have a fully independent I/O Domain named alternate.  This is great for some use cases, however, if we don’t enable it to be a Service Domain as well then we won’t be able to extend that independence to our Guest Domains.  This will require that we create Virtual Services for each of these critical components of a domain.

We previously created a primary-vds0, and that will suit us just fine; however, we will also need an alternate-vds0.

# ldm add-vdiskserver primary-vds0 primary
# ldm add-vdiskserver alternate-vds0 alternate

We did not provision any Virtual Switches previously, as we had no need: we handed physical NICs directly to primary and alternate. Here we will create both primary-vsw0 and alternate-vsw0.

# ldm add-vswitch net-dev=net0 primary-vsw0 primary
# ldm add-vswitch net-dev=net0 alternate-vsw0 alternate

To connect to the console of LDOMs we must have a virtual console concentrator.  This should have been set up previously in order to install the alternate domain.

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary

Now let’s save our settings, since we have set up the services.

# ldm add-config redundant-virt-services

With our progress saved we can move on.

Creating Multipath Storage Devices

In order to utilize the redundancy of LDM, we will need to create redundant virtual disk devices.  The key difference here is that we will need to specify a mpgroup.

# ldm add-vdsdev mpgroup=san01-fc primary-backend ldm1-disk0@primary-vds0

And now the same device, using the alternate domain.

# ldm add-vdsdev mpgroup=san01-fc alternate-backend ldm1-disk0@alternate-vds0

Another thing to notice: when using multiple protocols on the same SAN, it is important to use a different mpgroup for each, because failures can occur in interconnect layers that don’t affect other protocols.  Case in point: a failure of the FC fabric wouldn’t affect the availability of NFS services, so those failures need to be monitored separately. The jury is still out on where the line should be drawn in terms of what goes into a single mpgroup.  As I was testing live migration, it seemed more effective to use the VM and the protocol together as the boundary, as live migration checks the mpgroup for the number of members on both sides as part of its validation. So, in this case, it might be ldm1-fc and ldm1-nfs.

# ldm add-vdsdev mpgroup=san01-nfs primary-backend ldm1-disk1@primary-vds0

Again the same device for the alternate domain.

# ldm add-vdsdev mpgroup=san01-nfs alternate-backend ldm1-disk1@alternate-vds0

Now we are ready to support the domain. Next, we will create the domain and assign the disk resources.  It is important to note that we do not assign BOTH disk resources, only the primary; the mpgroup will take care of the redundancy.

# ldm add-domain ldm1
# ldm set-vcpu 16 ldm1
# ldm set-memory 16G ldm1
# ldm add-vdisk disk0 ldm1-disk0@primary-vds0 ldm1

In the next section we will create some redundant network interfaces.

Creating Redundant Guest Networking

Redundant networking is really not any different from non-redundant networking: we simply create two VNICs, one on primary-vsw0 and the other on alternate-vsw0. Once provisioned, we create an IPMP interface inside of the guest. In theory you could use DLMP as well, though I haven’t tested this option.

# ldm add-vnet vnet0 primary-vsw0 ldm1
# ldm add-vnet vnet1 alternate-vsw0 ldm1

From the control domain we now need to bind and start the guest, then install it.

# ldm bind ldm1
# ldm start ldm1

I am assuming that you know how to install Solaris, as you will already have done so at least twice to get to this point.  Now it’s time to configure networking. If you need help with configuring networking, see the following articles.

Solaris 11: Network Configuration Basics

Solaris 11: Network Configuration Advanced

ldm1# ipadm create-ip net0
ldm1# ipadm create-ip net1
ldm1# ipadm create-ipmp -i net0 -i net1 ipmp0
ldm1# ipadm create-addr -T static -a 192.168.1.11/24 ipmp0/v4
ldm1# route -p add default 192.168.1.1

At this point, you have all the pieces in place for redundant guests.  Now it is time to do some rolling reboots of the primary and alternate domains and ensure your VM stays up and running.  Inside the guest, the only thing amiss is that you will see IPMP members go into a failed state, then come back up as the services are restored.

One final note.  From the ILOM, issuing -> stop /SYS will shut down the physical hardware, which means both domains and all guests.

SPARC Logical Domains: Alternate Service Domains Part 2

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two (this article) we are going to take that information and split out our PCI Root Complexes and configure and install an alternate domain.  At the end of this article, you will be able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three we will be creating redundant virtual services as well as some guests that will use the redundant services that we created, and will go through some testing to see the capabilities of this architecture.

Remove PCI Roots From Primary

The changes that we need to make require that we put the primary domain into delayed reconfiguration mode, and a reboot is needed to implement the changes.  This mode also prevents further changes to other domains.

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we remove the unneeded PCI Roots from the primary domain; this will allow us to assign them to the alternate domain.

# ldm remove-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm remove-io pci_3 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let’s save our configuration.

# ldm add-config reduced-io

Now a reboot to make the configuration active.

# reboot

When it comes back up we should see the PCI Roots unassigned.

Create Alternate Domain

Now we can create our alternate domain and assign it some resources.

# ldm add-domain alternate
# ldm set-vcpu 16 alternate
# ldm set-memory 16G alternate

We have set this with 2 cores and 16GB of RAM.  Your sizing will depend on your use case.

Add PCI Devices to Alternate Domain

We are assigning pci_1 and pci_3 to the alternate domain, which will then have direct access to two of the on-board NICs, two of the disks, and half of the PCI slots.  It also inherits the CD-ROM as well as the USB controller.

Really quickly, I want to point something out.  The disks are not split evenly: pci_0 has four disks, while pci_3 has only two.  That said, if your configuration included six disks, then I would recommend using the third and fourth in the primary as a non-redundant storage pool, perhaps to stage firmware and such for patching.  The bottom line is that you need to purchase the hardware with a minimum of four drives.

# ldm add-io pci_1 alternate

# ldm add-io pci_3 alternate

Now we have NICs and disks on our alternate domain; we just need something to boot from and we can get the install going.

Let's save our config before moving on.

# ldm add-config alternate-domain

With the config saved we can move on to the next steps.

Install Alternate Domain

We should still have our CD in from the install of the primary domain.  After switching the PCI Root Complexes, the CD drive will be presented to the alternate domain (as it is attached to pci_3).

The first thing to do is bind our domain.

# ldm bind alternate

Then we need to start the domain.

# ldm start alternate

Next we need to determine which port telnet is listening on for this particular domain.  In our case we can see it is 5000.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m
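If you script console access, the CONS port can be pulled out of the `ldm ls` listing with awk. A minimal sketch, assuming the column layout shown above (CONS is the fourth field):

```shell
# Print the console (CONS) column for a named domain from `ldm ls` output.
console_port() {
  awk -v dom="$1" '$1 == dom { print $4 }'
}

# Sample `ldm ls` output, as captured above.
sample='NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m'

# Real use would be: ldm ls | console_port alternate
printf '%s\n' "$sample" | console_port alternate   # prints 5000
```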

When using these various consoles, you always need to be attentive to the escape sequence; in the case of telnet it is ^], which is "CTRL" + "]".  Once we have determined which port to telnet to, we can start the connection.  Also important to note: you will see "::1: Connection refused".  This is because we are connecting to localhost over IPv6 first; if you don't want to see that error, connect to 127.0.0.1 (the IPv4 loopback address) instead.

# telnet localhost 5000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to AK00176306.
Escape character is '^]'.

Connecting to console "alternate" in group "alternate" ....
Press ~? for control options ..

telnet> quit
Connection to AK00176306 closed.

I will let you go through the install on your own, but I am assuming that you know how to install the OS itself.

Now let's save our config, so that we don’t lose our progress.

# ldm add-config alternate-domain-config

At this point, if we have done everything correctly, we can reboot the primary domain without disrupting service to the alternate domain.  Pinging both domains during the reboot will illustrate where we are in the build.  Of course, you will need networking configured on the alternate domain.  And don't forget the simple stuff, like mirroring your rpool; it would be a pity to go to all this trouble and not have a basic level of redundancy such as mirrored disks.

Test Redundancy

At this point, the alternate and primary domains are completely independent.  To validate this, I recommend setting up a ping to both the primary and the alternate domain and rebooting the primary.  If done correctly, you will not lose any pings to the alternate domain.  Keep in mind that while the primary is down, you will not be able to utilize the "control domain", that is, the only domain which can configure and start/stop other domains.

SPARC Logical Domains: Alternate Service Domains Part 1

In this series, we will be going over configuring alternate I/O and Service domains, with the goal of increasing the serviceability of SPARC T-Series servers without impacting other domains on the hypervisor.  Essentially, this enables rolling maintenance without having to rely on live migration or downtime.  It is important to note that this is not a cure-all; for example, base firmware updates would still be disruptive, but minor firmware such as that for disks and I/O cards should be able to be rolled.

In Part One we will go through the initial Logical Domain configuration, as well as mapping out the devices we have and if they will belong in the primary or the alternate domain.

In Part Two we will go through the process of creating the alternate domain and assigning the devices to it, thus making it independent of the primary domain.

In Part Three we will create redundant services to support our Logical Domains as well as create a test Logical Domain to utilize these services.

Initial Logical Domain Configuration

I am going to assume that your configuration is currently at the factory default and that you, like me, are using Solaris 11.2 on the hypervisor.

# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 511G 0.4% 0.3% 6h 24m

The first thing we need to do is remove some of the resources from the primary domain so that we are able to assign them to other domains.  Since the primary domain is currently active and using these resources, we will enable delayed reconfiguration mode.  This accepts all changes, and then, on a reboot of that domain (in this case primary, which is the control domain on the physical machine), the new configuration takes effect.

# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we can start reclaiming some of those resources.  I will assign 2 cores (16 vCPUs) and 16GB of RAM to the primary domain.

# ldm set-vcpu 16 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# ldm set-memory 16G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

I like to save configurations often when we are making a lot of changes.

# ldm add-config reduced-resources

Next we will need some services to allow us to provision disks to domains and to connect to the console of domains for the purposes of installation or administration.

# ldm add-vdiskserver primary-vds0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's add another configuration to bookmark our progress.

# ldm add-config initial-services

We need to enable the Virtual Network Terminal Server service; this allows us to telnet from the control domain into the other domains.

# svcadm enable vntsd

Finally, a reboot will put everything into action.

# reboot

When the system comes back up we should see a drastically different LDM configuration.

Identify PCI Root Complexes

All of the T5-2s that I have looked at have been laid out the same, with a SAS HBA and an on-board NIC on pci_0 and pci_3, and most of the PCI slots on pci_1 and pci_2.  To split everything evenly, pci_0 and pci_2 stay with the primary, while pci_1 and pci_3 go to the alternate.  However, so that you understand how we know this, I will walk you through identifying the root complexes as well as the discrete types of devices.

# ldm ls -l -o physio primary

NAME
primary

IO
DEVICE PSEUDONYM OPTIONS
pci@340 pci_1
pci@300 pci_0
pci@3c0 pci_3
pci@380 pci_2
pci@340/pci@1/pci@0/pci@4 /SYS/MB/PCIE5
pci@340/pci@1/pci@0/pci@5 /SYS/MB/PCIE6
pci@340/pci@1/pci@0/pci@6 /SYS/MB/PCIE7
pci@300/pci@1/pci@0/pci@4 /SYS/MB/PCIE1
pci@300/pci@1/pci@0/pci@2 /SYS/MB/SASHBA0
pci@300/pci@1/pci@0/pci@1 /SYS/MB/NET0
pci@3c0/pci@1/pci@0/pci@7 /SYS/MB/PCIE8
pci@3c0/pci@1/pci@0/pci@2 /SYS/MB/SASHBA1
pci@3c0/pci@1/pci@0/pci@1 /SYS/MB/NET2
pci@380/pci@1/pci@0/pci@5 /SYS/MB/PCIE2
pci@380/pci@1/pci@0/pci@6 /SYS/MB/PCIE3
pci@380/pci@1/pci@0/pci@7 /SYS/MB/PCIE4

This shows us that pci@300 = pci_0, pci@340 = pci_1, pci@380 = pci_2, and pci@3c0 = pci_3.
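That mapping is easy to script as well; a sketch that parses the DEVICE/PSEUDONYM pairs from the listing above:

```shell
# Look up the ldm pseudonym (pci_N) for a device-tree root such as pci@300,
# reading the DEVICE/PSEUDONYM pairs printed by `ldm ls -l -o physio`.
root_pseudonym() {
  awk -v dev="$1" '$1 == dev && $2 ~ /^pci_/ { print $2 }'
}

# Sample pairs from the listing above.
physio='pci@340 pci_1
pci@300 pci_0
pci@3c0 pci_3
pci@380 pci_2'

# Real use: ldm ls -l -o physio primary | root_pseudonym pci@300
printf '%s\n' "$physio" | root_pseudonym pci@300   # prints pci_0
```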

Map Local Disk Devices To PCI Root

First we need to determine which disk devices are in the zpool, so that we know which ones cannot be removed.

# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 70.3G in 0h8m with 0 errors on Fri Feb 21 05:56:34 2014
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA04385ED60d0 ONLINE 0 0 0
c0t5000CCA0438568F0d0 ONLINE 0 0 0

errors: No known data errors

Next we must use mpathadm to find the Initiator Port Name.  To do that we must look at slice 0 of c0t5000CCA04385ED60d0.

# mpathadm show lu /dev/rdsk/c0t5000CCA04385ED60d0s0
Logical Unit: /dev/rdsk/c0t5000CCA04385ED60d0s2
mpath-support: libmpscsi_vhci.so
Vendor: HITACHI
Product: H109060SESUN600G
Revision: A606
Name Type: unknown type
Name: 5000cca04385ed60
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA

Paths:
Initiator Port Name: w5080020001940698
Target Port Name: w5000cca04385ed61
Override Path: NA
Path State: OK
Disabled: no

Target Ports:
Name: w5000cca04385ed61
Relative ID: 0

Our output shows us that the initiator port is w5080020001940698.

# mpathadm show initiator-port w5080020001940698
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8
Initiator Port: w5080020001940698
Transport Type: unknown
OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4

So we can see that this particular disk is on pci@300, which is pci_0.

Map Ethernet Cards To PCI Root

First we must determine the underlying device for each of our network interfaces.

# dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 10000 full ixgbe0

In this case it is ixgbe0.  We can then look at the device tree to see where the link points, which tells us which PCI root this device is connected to.

# ls -l /dev/ixgbe0
lrwxrwxrwx 1 root root 53 Feb 12 2014 /dev/ixgbe0 -> ../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0

Now we can see that it is using pci@300, which translates into pci_0.

Map Infiniband Cards to PCI Root

Again, let's determine the underlying device name of our infiniband interfaces.  On my machine they defaulted to net2 and net3; however, I had previously renamed the links to ib0 and ib1 for simplicity.  This procedure is very similar to that for Ethernet cards.

# dladm show-phys ib0
LINK MEDIA STATE SPEED DUPLEX DEVICE
ib0 Infiniband up 32000 unknown ibp0

In this case our device is ibp0.  So now we just check the device tree.

# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Nov 26 07:17 /dev/ibp0 -> ../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0

We can see by the path, that this is using pci@380 which is pci_2.

Map Fibre Channel Cards to PCI Root

Perhaps we need to split up some Fibre Channel HBAs as well.  The first thing we must do is look at the cards themselves.

# luxadm -e port
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl NOT CONNECTED

We can see here that these use pci@300 which is pci_0.
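Every mapping in this section comes down to the same step: extracting the leading pci@ component from a /devices path, then translating it with the physio listing.  A small helper makes that repeatable (a sketch; the hex character class is an assumption about the address format):

```shell
# Extract the PCI root complex (e.g. pci@300) from a /devices path, whether it
# comes from mpathadm, a /dev symlink target, or luxadm output.
pci_root() {
  echo "$1" | sed -n 's|^.*devices/\(pci@[0-9a-f]*\)/.*|\1|p'
}

pci_root /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1   # prints pci@300
pci_root ../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0   # prints pci@380
```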

The Plan

Basically, we are going to split our PCI devices by even and odd, with even staying with the primary and odd going to the alternate.  On the T5-2, this will result in the PCI-E cards on the left side belonging to the primary, and the cards on the right to the alternate.

Here is a diagram of how the physical devices are mapped to PCI Root Complexes.

FIGURE 1.1 – Oracle SPARC T5-2 Front View

FIGURE 1.2 – Oracle SPARC T5-2 Rear View

References

SPARC T5-2 I/O Root Complex Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.z40005601508415.html

SPARC T5-2 Front Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgcddce.html#scrolltoc

SPARC T5-2 Rear Panel Connections – https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgdeaei.html#scrolltoc

SPARC Logical Domains: Live Migration

One of the ways that we are able to accomplish regularly scheduled maintenance is by utilizing Live Migration; with it we can move workloads from one physical machine to another without service interruption.  The way it is done with Logical Domains is much more flexible than with most other hypervisor solutions: it doesn't require any complicated cluster setup or management layer, so you could utilize any compatible hardware at the drop of a hat.

This live migration article also builds on some technology that I have written about but not yet published (it should be published within the next week): Alternate Service Domains.  If you are using alternate service domains, live migration is still possible; if you are not, live migration is actually easier, as the underlying devices are simpler and therefore easier to match.

Caveats to Migration

  • Virtual Devices must be accessible on both servers, via the same service name (though the underlying paths may be different).
  • IO Domains cannot be live migrated.
  • Migrations can be either online ("live") or offline ("cold"); the state of the domain determines which.
  • When doing a cold migration, virtual devices are not checked to ensure they exist on the receiving end; you will need to check this manually.

Live Migration Dry Run

I recommend performing a dry run of any migration prior to performing the actual migration.  This will highlight any configuration problems prior to the migration happening.

# ldm migrate-domain -n ldom1 root@server
Target Password:

This will surface any errors that an actual migration would generate, but without actually causing you problems.

Live Migration

When you are ready to perform the migration, remove the dry-run flag.  This process will also perform the appropriate safety checks to ensure that everything is in order on the receiving end.

# ldm migrate-domain ldom1 root@server
Target Password:

Now the migration will proceed and unless something happens it will come up on the other system.

Live Migration With Rename

We can also rename the logical domain as part of the migration; we simply specify the new name after the target.

# ldm migrate-domain ldom1 root@server:ldom2
Target Password:

In this case, the original name was ldom1 and the new name is ldom2.

Common Errors

Here are some common errors.

Bad Password or No LDM on Target

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to establish connection with ldmd(1m) on target: server
Check that the 'ldmd' service is enabled on the target machine and
that the version supports Domain Migration. Check that the 'xmpp_enabled'
and 'incoming_migration_enabled' properties of the 'ldmd' service on
the target machine are set to 'true' using svccfg(1M).

Probable Fixes – Ensure that you are attempting to migrate to the correct hypervisor, that the username/password combination is correct, that the user has the appropriate level of access to ldmd, and that ldmd is running.

Missing Virtual Disk Server Devices

# ldm migrate-domain ldom1 root@server
Target Password:
The number of volumes in mpgroup 'zfs-ib-nfs' on the target (1) differs
from the number on the source (2)
Domain Migration of LDom ldom1 failed

Probable Fixes – Ensure that the underlying virtual disk devices match; if you are using mpgroups, then the entire mpgroup must match on both sides.

Missing Virtual Switch Device

# ldm migrate-domain ldom1 root@server
Target Password:
Failed to find required vsw alternate-vsw0 on target machine
Domain Migration of LDom ldom1 failed

Probable Fixes – Ensure that the underlying virtual switch devices match on both locations.

Check Migration Progress

One thing to keep in mind is that during the migration process, the hypervisor that is being evacuated is the authoritative one in terms of controlling the process, so status should be checked there.

source# ldm list -o status ldom1

NAME
ldom1

STATUS
OPERATION PROGRESS TARGET
migration 20% 172.16.24.101:ldom1

It can however be checked on the receiving end, though it will look a little bit different.

target# ldm list -o status ldom1

NAME
ldom1

STATUS
OPERATION PROGRESS SOURCE
migration 30% ak00176306-primary

The big thing to notice is that this side shows the source.  Also, if we renamed the domain as part of the migration, this side will show the new name.
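For scripted monitoring, the percentage can be picked out of either side's status output; a sketch assuming the OPERATION/PROGRESS layout shown above:

```shell
# Print the PROGRESS field from `ldm list -o status` output.
migration_progress() {
  awk '$1 == "migration" { print $2 }'
}

# Sample status output, as captured above.
status='NAME
ldom1

STATUS
OPERATION PROGRESS TARGET
migration 20% 172.16.24.101:ldom1'

# Real use: ldm list -o status ldom1 | migration_progress
printf '%s\n' "$status" | migration_progress   # prints 20%
```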

Cancel Migration

Of course, if you need to cancel a migration, this would be done on the hypervisor that is being evacuated, since it is authoritative.

# ldm cancel-operation migration ldom1
Domain Migration of ldom1 has been canceled

This will allow you to cancel any accidentally started migrations; in practice, however, anything that you would need to cancel would likely generate an error before you got this far.

Cross CPU Considerations

By default, logical domains are created to use very specific CPU features based on the hardware they run on, so live migration only works by default between the exact same CPU type and generation.  However, if we change the cpu-arch property of the domain, we can trade some CPU features for broader migration compatibility.  The available classes are:

  • Native – Allows migration between the same CPU type and generation.
  • Generic – Uses the most generic processor feature set to allow the widest live migration capabilities.
  • Migration Class 1 – Allows migration between T4, T5 and M5 server classes (also supports M10, depending on firmware version).
  • SPARC64 Class 1 – Allows migration between Fujitsu M10 servers.

Here is an example of how you would change the CPU architecture of a domain.  I personally recommend using this sparingly and building your hardware infrastructure so that you have capacity on the same generation of hardware; however, in certain circumstances this can make a lot of sense if the performance implications are not too great.

# ldm set-domain cpu-arch=migration-class1 ldom1

I personally wouldn't count on the cross-CPU functionality, though in some cases it might make sense for your situation.  Either way, live migration of Logical Domains is implemented very effectively and adds a lot of value.

Solaris 11: Configure IP Over Infiniband Devices

In this article we will be going over the configuration of an infiniband interface with the IPoIB protocol on Solaris 11, specifically Solaris 11.2 (previous versions of Solaris 11 should work the same, however, there have been changes in the ipadm and dladm commands).

Identify Infiniband Datalinks

First we need to identify the underlying interfaces of the infiniband interfaces.  In my case net2 and net3.

# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net0 Ethernet up 1000 full ixgbe0
net2 Infiniband up 32000 unknown ibp0
net3 Infiniband up 32000 unknown ibp1
net5 Ethernet up 1000 full vsw0

Another way to confirm the infiniband interfaces is to use the show-ib command.

# dladm show-ib
LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net2 10E0000128EBC8 10E0000128EBC9 1 up kel01-gw01 0a-eth-1 7FFF,FFFF
 kel01-gw02 0a-eth-1
net3 10E0000128EBC8 10E0000128EBCA 2 up kel01-gw01 0a-eth-1 7FFF,FFFF
 kel01-gw02 0a-eth-1

Rename Infiniband Datalinks

I like to rename the datalinks to ib0 and ib1; it makes it easier to keep everything nice and tidy.

# dladm rename-link net2 ib0
# dladm rename-link net3 ib1

Now to show the updated datalinks.

# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net0 Ethernet up 1000 full ixgbe0
ib0 Infiniband up 32000 unknown ibp0
ib1 Infiniband up 32000 unknown ibp1
net5 Ethernet up 1000 full vsw0

Now in subsequent actions we will use ib0 and ib1 as our datalinks.

Create Infiniband Partition

First, let's talk about partitions.  Partitions are most closely analogous to VLANs; however, their purpose is to provide isolated segments, so there is no concept of a "router" on IB.  Your use case might be isolating storage or database services, or even isolating customers from one another (which you definitely should do in a multitenant environment where customers have access to the operating system).  What we want to do is identify the partition to be created; if you do not use IB partitioning, you will use the "default" partition of ffff.

# dladm create-part -l ib0 -P 0xffff pffff.ib0

If you do use partitioning, then you will need to define the partition that you wish to use, for this example 7fff.  Which partition to use is determined from the dladm show-ib output; it lists the PKEYs that are available, and these are the partitions.

# dladm create-part -l ib0 -P 0x7fff p7fff.ib0

Now let's review the partitions.

# dladm show-part
LINK PKEY OVER STATE FLAGS
pffff.ib0 FFFF ib0 unknown ----
p7fff.ib0 7FFF ib0 unknown ----

We now have our two partitions defined.

Create IP Interfaces

Now that we have the Infiniband pieces configured, we simply create the IP interfaces so that we can subsequently assign IP addresses.  The IP interfaces are named partition.interfacename (for example, pffff.ib0).  Below is the interface for the "default" partition.

# ipadm create-ip pffff.ib0

And for our named partition for 7fff we create an interface as well.

# ipadm create-ip p7fff.ib0

Now we have our interfaces configured correctly.

Create IP Address

Now the easy part: this is exactly the same as with a standard Ethernet interface.  First, assign a static IP address for the default partition.

# ipadm create-addr -T static -a 10.1.10.11/24 pffff.ib0/v4

Also for our named partition.

# ipadm create-addr -T static -a 10.2.10.11/24 p7fff.ib0/v4

Now a few ping tests and we are in business.  Remember you will not be able to ping from one partition to another, so you will need to identify a few endpoints on your existing Infiniband networks to test your configuration.
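The p&lt;pkey&gt;.&lt;link&gt; naming convention used throughout is easy to generate when scripting many partitions; a sketch (the commented commands are the Solaris-only steps from above, not executed here):

```shell
# Build the partition datalink name used above: p<pkey>.<link>
part_name() {
  echo "p${1}.${2}"
}

# The Solaris-only steps from this article would then be:
#   dladm create-part -l ib0 -P 0x7fff "$(part_name 7fff ib0)"
#   ipadm create-ip "$(part_name 7fff ib0)"
#   ipadm create-addr -T static -a 10.2.10.11/24 "$(part_name 7fff ib0)/v4"
part_name 7fff ib0   # prints p7fff.ib0
```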

Adventures in ZFS: Mirrored Rpool

It always makes sense to have a mirrored rpool for your production systems; however, that is not always how they are configured.  This procedure is really simple, but also critical.

Create a Mirrored Zpool

Check the existing devices to identify the one currently in use.

# zpool status rpool
  pool: rpool
 state: ONLINE
 scan: none requested
config:

 NAME STATE READ WRITE CKSUM
 rpool ONLINE 0 0 0
 c0t5000CCA0436359CCd0 ONLINE 0 0 0

errors: No known data errors

Once we know which one is currently in use, we need to find a different one to mirror onto.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c0t5000CCA0436359CCd0 <HITACHI-H109030SESUN300G-A606-279.40GB>
 /scsi_vhci/disk@g5000cca0436359cc
 /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD0/disk
  1. c0t5000CCA043650CD8d0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625> solaris
 /scsi_vhci/disk@g5000cca043650cd8
 /dev/chassis/SPARC_T5-2.AK00176306/SYS/SASBP/HDD1/disk
Specify disk (enter its number):

Then we can build our mirrored rpool; this part is exactly the same as creating a mirror for any other zpool.

# zpool attach rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0
vdev verification failed: use -f to override the following errors:
/dev/dsk/c0t5000CCA043650CD8d0s0 contains a ufs filesystem.
/dev/dsk/c0t5000CCA043650CD8d0s6 contains a ufs filesystem.
Unable to build pool from specified devices: device already in use

In some cases, the new disk will have an existing file system on it; in that case we will need to force the attach.  Please use caution when using force, as it could cause you problems if you have multiple zpools on a system.

# zpool attach -f rpool c0t5000CCA0436359CCd0 c0t5000CCA043650CD8d0

Make sure to wait until resilver is done before rebooting.

Now that will start the resilvering process, and we must wait for that to finish completely before rebooting.  So depending on the size of your disks it might be time for coffee or lunch.

# zpool status rpool
 pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
 continue to function in a degraded state.
action: Wait for the resilver to complete.
 Run 'zpool status -v' to see device specific details.
 scan: resilver in progress since Fri Nov 28 10:11:03 2014
 224G scanned
 6.67G resilvered at 160M/s, 2.86% done, 0h23m to go
config:

 NAME STATE READ WRITE CKSUM
 rpool DEGRADED 0 0 0
 mirror-0 DEGRADED 0 0 0
 c0t5000CCA0436359CCd0 ONLINE 0 0 0
 c0t5000CCA043650CD8d0 DEGRADED 0 0 0 (resilvering)

errors: No known data errors

Let's check again and see if it has finished.

# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 224G in 0h27m with 0 errors on Fri Nov 28 10:38:25 2014
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA0436359CCd0 ONLINE 0 0 0
c0t5000CCA043650CD8d0 ONLINE 0 0 0

errors: No known data errors

If you are mirroring an ordinary zpool, that is the end of it.  However, if this is rpool, your mirror will not be worth anything unless it also includes the boot blocks.

Install Boot Blocks on SPARC

If your system is SPARC, you will use the installboot utility to install the boot blocks on the disk to ensure you will be able to boot from it in the event of primary disk failure.

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0
WARNING: target device /dev/rdsk/c0t5000CCA043650CD8d0s0 has a versioned bootblock but no versioning information was provided.
bootblock version installed on /dev/rdsk/c0t5000CCA043650CD8d0s0 is more recent or identical
Use -f to override or install without the -u option

Again, if this disk is not brand new, it might have existing boot blocks on it, which we will need to forcibly overwrite.

# installboot -f -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA043650CD8d0s0

That wraps it up for a SPARC installation; it of course makes sense to test booting from the second disk as well.

Install Boot Blocks on x86

If you are using an x86 system, then you will need to use the installgrub utility.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000CCA043650CD8d0s0

There you have it.  We have successfully mirrored our x86 system as well.

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray and take a look at using CentOS as a hypervisor, configuring very resilient network connections to support our guests.  These instructions should be valid on Red Hat Enterprise Linux and Oracle Linux as well, though on those distributions there is a little more to be done around getting access to the repos.

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which will increase the resiliency of the interface.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is what KVM creates the vNIC on to allow for guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0

Service Restart

Finally, the easy part.  One snag I ran into: if you previously created IP addresses on bond0, you will have a tough time getting rid of them with a service restart alone; I found it was easier to reboot the box itself.

# service network restart

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with is integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out easily enough.  Calendars, however, turned out to be more of a challenge than I expected.

Here are the versions that I validated these steps on.

  • Blackberry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming that you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts (with mine obfuscated, of course).  We are going to add another one, so we select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3 that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we are selecting CalDAV as the type of account to use.  As a footnote, I was unable to get CardDAV working; I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we are populating all of the information needed to make a connection.  Keep in mind that the username must be user@domain.tld, and the Server Address should be in the following format: https://zimbra.domain.tld/dav/user@domain.tld/Calendar.  The important bits here are (1) https – I suspect http works as well, but I did not validate it; (2) the username – it is a component of the URI, which makes this a little tough to implement for less sophisticated users; and (3) Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether you can have calendars with other names, but this is the name needed in most situations.
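The server address can be derived mechanically from the Zimbra host and the account name; a sketch (zimbra.example.com and the user are placeholder values):

```shell
# Build the Zimbra CalDAV URL in the format described above:
# https://<zimbra-host>/dav/<user@domain>/Calendar
caldav_url() {
  echo "https://${1}/dav/${2}/Calendar"
}

caldav_url zimbra.example.com user@example.com
# prints https://zimbra.example.com/dav/user@example.com/Calendar
```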

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology and healthcare centric marketing firm, we at illumeture work with emerging companies in achieving more right conversations with right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today. 

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary: you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end to end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or, if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions, however, in this case, I found it on Voyage Linux, which is a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, which is the package it is not finding.  Once installed, there is no longer any need to fall back to Readline.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.
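If installing extra packages is not an option on a stripped-down image, debconf can also be told to skip the dialog frontend entirely. A small sketch, not from the original fix, using debconf's standard DEBIAN_FRONTEND variable:

```shell
#!/bin/sh
# Force debconf to the noninteractive frontend so it never looks for
# the dialog binary (useful in scripts and unattended upgrades).
DEBIAN_FRONTEND=noninteractive
export DEBIAN_FRONTEND

# apt-get -y upgrade   # would now run without the debconf warning

echo "frontend: ${DEBIAN_FRONTEND}"
```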

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around; it is very stripped down, and as such there are a few annoyances we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to set the locale to en_US.UTF-8, or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.
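To confirm the fix took, you can export the locale and ask the `locale` tool what is in effect. A minimal sketch, assuming en_US.UTF-8 is the locale you generated above:

```shell
#!/bin/sh
# Export the locale chosen above; LC_* values fall back to LANG, so once
# the locale is generated, `locale` reports it without warnings.
LANG=en_US.UTF-8
export LANG

# Show what the environment resolves to.
locale | grep '^LANG='
```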

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh

Oracle SQL Developer
Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.

LOAD TIME : 279#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000038a1e64910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log
[thread 140449881597696 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

I also noticed that it worked when executed as root.  However, that clearly isn’t the “solution”.

Fixing the Problem

Here we need to remove the GNOME_DESKTOP_SESSION_ID environment variable as part of the launch script.

$ cat /opt/sqldeveloper/sqldeveloper.sh
#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Once this was completed, SQL Developer launched cleanly for me.
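If you would rather not edit Oracle's script, the same effect is available per launch with `env -u`, which strips a variable from the child process only. A quick demonstration of the mechanism (the sqldeveloper path shown is the one from above):

```shell
#!/bin/sh
# env -u removes a variable from the child's environment without
# touching your interactive shell.
GNOME_DESKTOP_SESSION_ID=this-is-set
export GNOME_DESKTOP_SESSION_ID

# The child process does not see the variable:
CHILD_VIEW=$(env -u GNOME_DESKTOP_SESSION_ID sh -c 'echo "${GNOME_DESKTOP_SESSION_ID:-unset}"')
echo "child sees: ${CHILD_VIEW}"

# For the real launch:
# env -u GNOME_DESKTOP_SESSION_ID /opt/sqldeveloper/sqldeveloper.sh
```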

 

Banking Institution Improves Security Management & Response

A publicly traded financial firm was seeking to better manage security requirements facing the business. Disparate systems within the IT environment required constant updating as new security patches were released, exposing the company to the risk of falling short of regulatory requirements.

GuideIT designed and implemented a patch management process to address ongoing updates within the environments. The patch management solution identified and updated over 130,000 security patches in the first 6 months.

GuideIT also provided a dedicated Incident Response Analyst to triage alerts and escalations, addressing a critical gap within the security organization. Working with the CISO, the analyst evaluated the infrastructure, policies and procedures, recommended improvements, and improved response time with alerting, reporting, and remediation.

End User Protection for Large Campus-Style Retail Environment

GuideIT provides a strategic cybersecurity partnership to a campus-style commercial retail environment through consulting, infrastructure, and end-user protection security solutions, implementing a defense-in-depth security strategy and positioning the organization for the future.

The Customer

A sprawling, campus-style retail environment routinely serves over one million annual visitors. The IT infrastructure has become an increasingly important component of operations, touching everything from facilities operations to customer care and internal communications. As the organization continues to grow, new technologies will further enhance operations and marketing outreach as it seeks to expand the customer base.

The Challenge

The organization recently sought a strategic technology partner to provide a comprehensive managed security solution protecting users and the IT environment from risks related to malware, ransomware, email threats, and critical security updates. It faced numerous challenges related to implementing and managing a defense-in-depth cybersecurity strategy.

An aging infrastructure and application environment, paired with a lack of internal resources, left the organization struggling to keep pace with a changing threat landscape and cybersecurity best practices. The customer realized that email in particular represented significant risk due to the ever-increasing volume of spam and potentially dangerous attachments at the email threat vector. Non-technical end users did not have the proper training or awareness to protect the organization, leading to increased risk of a potentially damaging attack.

The existing security solution did NOT:

» Actively monitor the environment
» Centrally manage patches and updates
» Enable scalability & adaptability
» Provide for remote management & maintenance

GuideIT Cyber Security solutions safeguard organizations against malicious cyber threats. We utilize an individualized approach to provide comprehensive protection that aligns with industry best practices. GuideIT end-user protection enables defense-in-depth strategies for end-user devices such as laptops, desktops and mobile devices, which are targeted by malicious actors to gain access into enterprise networks.

The Solution

GuideIT developed a solution to holistically address the shortcomings of the aging infrastructure and application environment with a fully managed approach. Comprehensive management and monitoring services focused on endpoint security would address the risk to the environment at the end-user attack surface. A robust strategy for patch management would ensure the environment was properly safeguarded against existing vulnerabilities with the latest updates available. Email security, comprising inbound traffic scanning, link protection, and threat quarantine, would mitigate the risk of ransomware, phishing attempts, and malicious payloads. A centrally managed data protection strategy would protect against data loss with full data encryption and in-browser web monitoring.

Solution Benefits

» Central management & monitoring
» End-to-end data encryption
» Web monitoring & protection
» Real-time malware protection
» Patch management and deployment
» Email link & attachment scanning
» Outbound data protection
» End user threat awareness training

Why GuideIT

IDENTIFY > PROTECT > DETECT > EDUCATE

GuideIT takes a holistic view of the security environment to evaluate the full threat landscape and identify unique vulnerabilities within an organization. Customers benefit from best-in-class security tools paired with a consultative, strategic approach. Leveraging a defense-in-depth framework that aligns with NIST best practices, the GuideIT security solutions methodology focuses on root cause analysis, visibility, and data-driven decision making to deliver an end-to-end cybersecurity strategy that hardens the IT infrastructure against attacks while also promoting security awareness within the entire organization.

GuideIT developed a comprehensive plan to transform the cybersecurity strategy with a defense-in-depth model. Leveraging industry best practices and the NIST framework, GuideIT assessed the landscape to identify threats and vulnerabilities, created a plan to address risks and promote awareness, and deployed solutions to secure the infrastructure and change end-user behavior, securing the IT environment.

The Implementation

1. ASSESSMENT - Upon initiation of the project, GuideIT quickly performed a comprehensive assessment of the environment to identify and evaluate legacy and stand-alone security solutions in place. High risk devices were identified and prioritized for phase one. Infrastructure and existing security postures were evaluated and tested.
2. PLANNING - With data collected from the assessment, GuideIT cybersecurity professionals developed a comprehensive plan to address issues with patch management, end-point protection, infrastructure security, and email security.
3. DEPLOYMENT - Agents for the centrally managed end-point protection solution were deployed within a week. The patching program was also deployed, targeting the most critical and vulnerable devices first.

The Results

The team identified systems in the environment that had not been actively patched in over six months. The systems were updated and brought into compliance with the policy. Initially, less than 35% of the environment was current with patches released within 30 days. Since implementation of new patch management processes and tools, the environment now maintains a 30-day rolling update ratio of over 95%.

Since the deployment of managed anti-virus, over 400 threats associated with malware, exploits and attempted access have been either blocked or resolved, ensuring the endpoints and users are secure. The email security solution initially scanned over 83,000 emails, effectively protecting the organization from nearly 20 different malware threats and over 50 individual phishing attempts. 27,000 links were scanned and protected, resulting in 70,000 clean messages successfully delivered during the initial deployment.

GuideIT Once Again Recognized Among Fastest Growing Private Companies by SMU Caruth Institute & Dallas Business Journal

Monday, October 26, 2020 – Plano, TX – GuideIT, a leading provider of managed IT and cloud solutions, today announced that it has once again been named one of the fastest growing entrepreneurial companies for a third year in the SMU Cox Dallas 100™ awards.

The Dallas 100, co-founded by the SMU Caruth Institute for Entrepreneurship and the Dallas Business Journal, recognizes the innovative spirit, determination and business acumen of Dallas-area entrepreneurs.  The award focuses not only on growth, but also on an organization’s character and creditworthiness.

“We are once again honored to be selected for the Dallas 100,” said Chuck Lyles, CEO for GuideIT. “It demonstrates our continued commitment to bringing leading edge solutions to market. We place a high value on the entrepreneurial spirit which has contributed to the success and growth which we have experienced over the last several years.”

About GuideIT

GuideIT delivers solutions to drive business success through technology. Through consulting, managed services, digital business, and cybersecurity solutions, GuideIT partners with customers, simplifies the complex, and inspires confidence while delivering technology with an industry specific context to enable the creation of business value and create an IT experience that delivers. 

Founded in 2013 and building on a heritage that dates to the industry’s founding, GuideIT has been recognized for its service quality, positive work environment and growth.  Learn more at www.guideit.com.

Healthcare Management Organization Realizes Cost Savings with AWS

Customer Profile

Our customer is a premier national provider of population healthcare management programs. For more than 40 years, they have offered value-added programs to plan sponsors that improve the overall health of engaged participants, including Integrated Clinical Solutions, Chronic Care Management, Behavioral Health Solutions, Wellness/Lifestyle Coaching, and Care Coordination.

The Challenge

Our customer was experiencing cost inefficiencies with their existing hosted server environment, which left them with less flexibility and control over their solution.

The Solution

GuideIT recommended moving the customer off their current hosting provider, Armor, and into AWS EC2 and AWS S3. Through this solution, the customer would realize a reduction in cost, and greater durability and recoverability.

AWS Services

  • Managed Microsoft SQL Server (RDS)
  • AWS EC2 with Microsoft Windows Server
  • AWS S3

Metrics for Success

  • Introduce cost savings with new AWS server
  • Increase data durability and recoverability
  • Reduce administration needs

The Result

  • Achieved greater than 30% reduction in cost through new solution
  • Successfully migrated server from Armor into a Managed Microsoft SQL Server
  • Eliminated the costly necessity of administrators manually pulling reports from the old system
  • Increased durability and recoverability through daily snapshots of AWS EC2 and AWS RDS

The Integration Architecture

  • TIBCO BusinessWorks installed on the EC2 instance retrieves Medical files from HMC clients, pushes a copy to AWS S3, processes files and pushes converted X12 data to HMC Healthworks
  • The file processes match customer data and create unique ids using Amazon RDS “Microsoft SQL Server”
  • Snapshots of AWS EC2 and AWS RDS are created daily to AWS S3
  • Recovery involves restoring snapshots and rerunning the files for the day
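The daily snapshot step above can be scripted against the AWS CLI. A hedged sketch: the instance and volume identifiers below are placeholders rather than the customer's real resources, and the `aws` calls are commented out because they require credentials:

```shell
#!/bin/sh
# Derive a date-stamped snapshot identifier, one per day.
DB_INSTANCE="hmc-sqlserver"              # placeholder RDS instance name
SNAP_ID="${DB_INSTANCE}-$(date +%F)"     # e.g. hmc-sqlserver-<YYYY-MM-DD>

# The actual calls (require credentials and real resource IDs):
# aws rds create-db-snapshot \
#     --db-instance-identifier "${DB_INSTANCE}" \
#     --db-snapshot-identifier "${SNAP_ID}"
# aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
#     --description "daily-${SNAP_ID}"

echo "${SNAP_ID}"
```

Scheduling this via cron (or an EventBridge rule) once per day matches the "daily snapshots" behavior described above.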

 

Introducing a New Website and Online Experience from GuideIT

As the world of technology continues to evolve into the future at a rapid pace, so does GuideIT. We are proud to announce that our new and improved website is here to provide more functionality for your outsourced IT experience. Here are all of the ways that our revamped website is working harder to provide a new online experience for your GuideIT services:

Continuing Education from GuideIT

Our new website provides continuing education on all of the latest trends in the IT industry from our perspective. Here, you can stay up to date on the changing world of technology by diving into the details of what makes it great. We understand that being dedicated to IT strategy and transformation means providing our clients with the details they need to succeed.

A New Design to Match Our Services

Our new website comes complete with an updated look designed to make navigating through our information easier. Just like with our services, we want the online experience we provide our customers to be as quick, simple and efficient as possible. We respect your time and money in everything we do, and our new website is certainly no exception to that rule.

Case Studies to Learn About Our Services

We have implemented several case studies that are aimed at helping our customers learn more about our services and understand their importance. Here, you can get an in-depth look at how GuideIT has helped countless companies optimize their technology and achieve their business goals. Take a look at our new case studies today to learn about the impact our services have made for our clients.

No matter how you hope to achieve operational excellence in your business, GuideIT is here to help with the same services you know and love. From managed IT services to management consulting and all of your cyber security needs, we provide services that can help businesses of all kinds thrive. Want to learn more about how GuideIT can help you? Check out our blog today!

The Latest Trends in Information Technology

GuideIT’s very own Chuck Lyles, CEO, recently sat in on the HIMSS SoCal Podcast to discuss emerging trends in information technology and how they relate to the healthcare industry. Listen in to learn about COVID-19’s impact on the IT industry, the importance of the Clinical Service Desk and the latest outsourcing trends in technology. Click the link below to learn more.

Catalyst Health and GuideIT’s Strategic Services Relationship

GuideIT serves as Catalyst Health’s strategic IT services partner and enables better results through increased customer satisfaction, improved cost-efficiency ratios, and greater infrastructure reliability and availability. Services include clinical and technical service desk, end user support, service management, infrastructure technology operations support, network management, and information technology security support.

The Customer

Catalyst Health is a URAC-accredited clinically integrated network of primary care physicians who have come together to provide high-quality care, helping communities thrive. Catalyst Health began its network of independent primary care physicians in 2015 in North Texas. In the four short years that followed, Catalyst Health has grown to nearly 1,000 primary care providers, with over 300 office locations, and 100 care team members, serving over one million patients. To date, Catalyst Health has saved more than $55 million for the communities it serves. Catalyst Health coordinates care, improves health, and lowers cost – creating sustainable and predictable value.

The Challenge

To support the rapid growth they were experiencing, Catalyst Health needed to transform their current Information Technology environment. The organization was building a new care management platform and expanding upon their existing professional service offerings to independent physician practices. Support of these initiatives would require remediating their current environment as the existing infrastructure support model was too costly.

The organization was seeking a partnership with a Managed Services provider to aid in implementing and supporting a 24x7 scalable model that would improve overall customer satisfaction, provide greater alignment to the business owners, and reduce overall cost as growth occurred. To achieve success of these initiatives, the organization would need to address the following:

  • Implement a high availability infrastructure to minimize downtime and service interruptions
  • Greater focus on end users and responsiveness with Service Level metrics and continuous improvement to support caregivers across the organization
  • Implement ITIL-based best practice standards across the organization that align IT services with the needs of the business
  • Improve cost efficiency ratio as growth occurs

“The integration of technology has been a vital part of Catalyst’s growth, driving our innovation and allowing us to accomplish our mission of helping communities thrive. GuideIT’s strategic direction has not only made our internal team more connected but has also allowed the physicians in our network to strengthen their relationships with their patients, all while saving everyone time and money. It’s been a win-win situation for all”
- Dr. Christopher Crow

The Solution

Catalyst Health determined the best approach to achieve the objectives of the business expansion would be to engage GuideIT to tap into their Managed Services solutions that would assume IT leadership and provide subject matter experts. GuideIT would deliver a solution that encompasses infrastructure management, monitoring, end user support, clinical applications service desk, technical service desk, vendor management, call center technology support, and security services. This would provide Catalyst Health with the environment to deploy a new Electronic Medical Record platform which will enable greater access to clinical data for caregivers and offer improved responsiveness while improving the long-term health of their patients. Goals of the IT partnership would include:

  • Stabilization of the enterprise infrastructure through Change Management and Best Practice adoption
  • Implementation of IT roadmap and modernization that included a new EMR platform
  • Greater control of IT cost as a percentage of total revenue that would generate cost savings
  • Business stakeholders prioritize IT initiatives for greater focus on success that would drive greater business results

Why GuideIT

With GuideIT’s focus on healthcare expertise, combined with its technology capabilities to manage a customer’s support requirements, a set of best practices and processes would be deployed to provide an improved result for Catalyst Health’s technology environment. GuideIT would operationalize a set of technology metrics to allow for greater transparency of performance, resiliency, and predictable results for the organization.

The best practice approach would create the foundation of operational excellence for Catalyst Health’s IT environment, achieving greater business results along with on-time, on-budget delivery. The underlying cost structure converted from fixed to variable to support scalability, allowing Catalyst Health to realize a lower expense ratio as quality improved. Access to critical skill sets that would otherwise be difficult to hire and retain would be of additional value to the organization.

The Implementation

GuideIT began with a consultative approach that included fully understanding the unique business model and support needs of Catalyst Health and its customers. Services were built around nine distinct areas: Infrastructure Management and Optimization, Service Desk, End User Field Support, Clinical Applications Support, Project Management, Vendor Management, Invoice Management, Security Enhancement, and Clinic Support.

1. Service Desk Management - Stakeholders identified the need to implement a more robust service desk that would aid in first call resolution for internal and external customers.
2. Infrastructure Management Transition - As the business grew, the need to support a larger, more diverse and scalable technology portfolio emerged. GuideIT assessed the environment and identified areas for immediate remediation. Infrastructure standards, procedures, and performance management solutions were implemented to optimize the existing technology. As a part of this transition, GuideIT transitioned existing customer IT staff and filled identified gaps in skill sets with additional resources.
3. Expansion of Infrastructure Support - With continued growth and dependency on technology, Catalyst Health expanded the relationship to include 24x7 Service Desk, Clinical Applications Service Desk, and project management. This expanded scope allowed for greater end-to-end problem resolution.
4. Enhancements to Support Today's Environment - The events of the pandemic in 2020 brought about new challenges and new solutions. In partnership with Catalyst Health, GuideIT responded with solutions for remote work, remote support, a COVID-19 hotline and, most recently, a Pharmacy Call Center.

The Results

  • Improved operational performance of IT systems with improved system availability
  • Seamless integration with the business departments to function as one-team
  • Improved IT solutions and responsiveness to the business
  • Improved efficiency cost ratios for the organization during a high growth period
  • Ability to support increased IT demand with a variable cost structure

Regional Health System to Accelerate Information Flow and Automate Back Office Processes through GuideIT

April 25, 2019 – Plano, TX – GuideIT today announced it signed a new contract to provide business intelligence solutions for a regional health system.

With the objectives of accelerating information flow and optimizing back-office processes, the health system launched an initiative to replace manual reporting that requires information from multiple sources, including its EMR.  GuideIT will integrate critical data sources into a common platform, apply business logic and develop the visualizations necessary to meet the health system’s management objectives.

“In healthcare, there is an opportunity to strengthen patient care and operating performance through greater and more timely access to information,” said Chuck Lyles, CEO for GuideIT. “Healthcare providers have more information about their patients and businesses than ever before.  At GuideIT, our healthcare and data specialists help healthcare providers leverage this information to produce tangible business accomplishments.”

GuideIT Digital Business solutions, which incorporate Digital Transformation, Business Intelligence and Digital Workplace, help organizations operate more efficiently, turn ideas for creating new business value into reality, and facilitate a dynamic, anytime-anyplace business environment.

About GuideIT

GuideIT provides IT services that make technology contribute to business success. Through its consulting, managed IT, digital business, and cyber security solutions, and through the way it partners with customers, simplifies the complex, and inspires confidence, GuideIT applies technology in an industry context to enable the creation of business value and deliver an IT experience that performs. Founded in 2013 and part of a heritage that dates to the industry's founding, GuideIT has been recognized for its service quality, positive work environment and growth. More information is available at www.guideit.com.

Risk and Security Management Solutions Provider Modernizes Go-To-Market Application

A leading provider of risk and security management solutions needed to rewrite and modernize its core go-to-market application. GuideIT collaborated with the organization to define its business requirements, developed the new application using a hybrid agile/waterfall development method, and continues to enhance the product through agile sprint and release cycles. The application, with its modern interface and improved features and functionality, helped the customer expand its subscriber base by more than 95% in a 20-month period.

How to Protect Your Business From the Growing Complexity of Email-Based Security Attacks

The Threat Landscape

Organizations face a growing frequency and complexity of email-based security threats, as the majority of targeted attacks begin with an email. Advanced malware delivery, phishing, and domain and identity spoofing can penetrate the primary layer of security provided as part of the email service and damage your business. With the increasing complexity of attacks, relying solely upon base security features and employee training is no longer adequate. Additionally, the types of organizations receiving these email attacks are expanding to include not only large and well-known businesses, but also small businesses, because of a perception that they will have fewer security layers.

Our Approach

With GuideIT Advanced Email Protection you receive the extra security necessary to address this growing threat. We provide a service configurable to the level of protection you seek, priced on a variable, per-mailbox basis. Based on the requirements established, which encompass the level of protection, filter rules and user parameters, we implement and operate the advanced protection while also giving you visibility into the threat environment and the actions taken to protect your business.

How It Works

We implement a protective shield, monitored by security experts, through which all email traffic is routed. Inbound messages are checked against known fraudulent and dangerous URLs and email addresses, while attachments are scanned for malware. When an incoming email is flagged, it is blocked and quarantined, and the GuideIT security team is notified. We then work with your team to revise the protective rules as necessary for your business. All outbound messages are scanned to ensure that Personally Identifiable Information (PII) and Protected Health Information (PHI) do not leave the organization accidentally or maliciously.
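The screening flow described above can be illustrated with a minimal sketch. This is not GuideIT's implementation; it is a simplified Python illustration assuming static block lists and a single PII pattern, where a real service would use continuously updated threat-intelligence feeds, malware scanners, and far richer data-loss-prevention rules.

```python
import re

# Hypothetical block lists standing in for live threat-intelligence feeds.
KNOWN_BAD_SENDERS = {"phish@fraud-example.com"}
KNOWN_BAD_URLS = {"http://malicious.example.com/login"}

# Illustrative PII pattern (US SSN-style numbers); real DLP rules cover
# many more identifiers, including PHI fields.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_inbound(sender: str, body: str) -> str:
    """Quarantine a message that matches a known-bad sender or URL."""
    if sender in KNOWN_BAD_SENDERS:
        return "quarantine"
    if any(url in body for url in KNOWN_BAD_URLS):
        return "quarantine"
    return "deliver"

def screen_outbound(body: str) -> str:
    """Block outbound mail containing a PII-like pattern."""
    return "block" if PII_PATTERN.search(body) else "send"
```

In this sketch, a quarantined inbound message would also trigger a notification to the security team, and the rules (the block lists and patterns) are the pieces revised in collaboration with the customer.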

Read next: How to Protect Your End User Devices from COVID-19 Phishing Attacks

How You Will Benefit

Through our Advanced Email Protection solution, you will realize:

  • Greater protection from advanced email threats
  • Increased visibility into the threats being experienced
  • Enhanced email encryption and data loss prevention
  • Extended protection to social media accounts
  • Better compliance and discovery readiness

Contact us to get started today.