Case Snapshots

See how GuideIT has helped companies achieve their business goals.


Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray and take a look at using CentOS as a hypervisor, configuring resilient network connections to support our guests.  These instructions should be valid on Red Hat Enterprise Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions.

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable; if the bonding module is already loaded, modprobe --first-time will simply report an error and change nothing.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

Configure the Bonded Interface

Here we are going to bond the two interfaces together, which increases the resiliency of the connection.  mode=1 is active-backup: only one slave carries traffic at a time, and the other takes over on link failure; miimon=100 checks link state every 100 milliseconds.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself; this is the device on which KVM creates the guest vNICs to allow for guest network communication.  Note that the IP address lives on the bridge, not on the bond.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0

Service Restart

Finally, the easy part.  One snag I ran into: if you previously configured IP addresses on bond0, you will have a tough time getting rid of them with a service restart alone.  I found it easier to reboot the box itself.

# service network restart
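After the restart, /proc/net/bonding/bond0 reports which slave is currently active.  The one-liner below is a small sketch run against a sample of that file's format (the sample text is illustrative, not output from a real host) so you can see the shape of the result:

```shell
# Parse the "Currently Active Slave" line as it appears in /proc/net/bonding/bond0.
# The sample below mimics the file's format; on a live host you would read the real file.
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'

printf '%s\n' "$sample" | awk -F': ' '/Currently Active Slave/ {print $2}'
# → eth0
```

On a live system the equivalent is: awk -F': ' '/Currently Active Slave/ {print $2}' /proc/net/bonding/bond0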

BlackBerry OS 10: CalDAV Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with was integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out readily enough.  Calendars, though, turned out to be more of a challenge than I expected.

Here are the versions that I validated these steps on.

  • BlackBerry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming that you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts (mine obfuscated, of course).  We are going to be adding another one, so we select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3 that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we are selecting CalDAV as the type of account to use.  As a footnote, I was unable to get CardDAV working; I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we are populating all of the information needed to make a connection.  Keep in mind that we need to use user@domain.tld for the username, and the Server Address should be in the following format: https://zimbra.domain.tld/dav/user@domain.tld/Calendar. The important bits here are (1) https – I suspect http works as well, but I did not validate it; (2) the username – it is a component of the URI, which makes this a little tough to implement for less sophisticated users; (3) Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether calendars with other names work, but this is the name needed for most situations.

Now set your password and sync interval and you should be ready to go.
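The URL format above is easy to get wrong, so here is a minimal shell sketch that assembles it; the server and account names below are illustrative examples, not real endpoints:

```shell
# Hypothetical helper that builds the Zimbra CalDAV URL in the format described above.
# The server and user passed in are examples only.
caldav_url() {
  local server="$1" user="$2"
  printf 'https://%s/dav/%s/Calendar\n' "$server" "$user"
}

caldav_url zimbra.example.com alice@example.com
# → https://zimbra.example.com/dav/alice@example.com/Calendar
```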

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology- and healthcare-centric marketing firm, we at illumeture work with emerging companies to achieve more of the right conversations with the right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT, responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today.

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary, you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end-to-end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days, and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions, however, in this case, I found it on Voyage Linux, which is a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, which is the package debconf cannot find.  With it installed, debconf no longer needs to fall back to Readline.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around; it is very stripped down, and as such there are a few annoyances we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to generate and set en_US.UTF-8, or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself, we get a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh

Oracle SQL Developer
Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.

LOAD TIME : 279#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000038a1e64910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log
[thread 140449881597696 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

I also noticed that it worked while executing as root.  However, that clearly isn’t the “solution”.

Fixing the Problem

Here we need to unset GNOME_DESKTOP_SESSION_ID as part of the launcher script.

$ cat /opt/sqldeveloper/sqldeveloper.sh
#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Once this was completed, SQL Developer launched clean for me.

 

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray, and take a  look at using CentOS as a hypervisor., and configuring very resilient network connections to support our guests.  Of course these instructions should be valid on Red Hat Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions…

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0<br />

DEVICE=eth0<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1<br />

DEVICE=eth1<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which will increase the resiliency of the interface.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0<br />

DEVICE=bond0<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

USERCTL=no<br />

BONDING_OPTS=&quot;mode=1 miimon=100&quot;<br />

BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is what KVM creates the vNIC on to allow for guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0<br />

DEVICE=br0<br />

TYPE=Bridge<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

USERCTL=no<br />

IPADDR=192.168.1.10<br />

NETMASK=255.255.255.0<br />

GATEWAY=192.168.1.1<br />

DELAY=0

Service Restart

Finally the easy part.  Now one snag I ran into.  If you created IP addresses on bond0, then you will have a tough time getting rid of that with a service restart alone.  I found it was easier to reboot the box itself.

# service network restart

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my Blackberry Z10, going on a year now, and I have absolutely loved it.  However, one of the issues that I have fought was in integrating it with my Zimbra Installation.  Email was easy, the IMAP protocol sorted that out easily enough… However, calendars turned out to be more of a challenge than I expected.

Here is the versions that I validated these steps on.

  • Blackberry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started, I am assuming that you know how to find the settings on BB10, but once there go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts, with mine obfuscated, of course, however, we are going to be adding another one, so we select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3, that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we are selecting CalDAV as the type of Account to use.  Also a little footnote, I was unable to get CardDAV working. I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we are populating all of the information needed to make a connection.  Please keep in mind, that we need to use user@domain.tld for the username, and the Server Address should be in the following format:  https://zimbra.domain.tld/dav/user@domain.tld/Calendar. The important bits here are (1) https – I suspect http works as well, but I did not validate (2) username – the username is a component of the URI, this makes it a little tough to implement for less sophisticated users (3) Calendar – the default calendar for all Zimbra users is named “Calendar” – with a capital “C” not sure if you can have calendars with other names, but this is the name needed for most situations.

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology and healthcare centric marketing firm, we at illumeture work with emerging companies in achieving more right conversations with right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today. 

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary, you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end to end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year road-map.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second, is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do ensure you retain contractual flexibility that goes well beyond contract bench-marking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At Guide IT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions, however, in this case, I found it on Voyage Linux, which is a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog<br />

debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, &lt;&gt; line 1.)<br />

debconf: falling back to frontend: Readline

The Fix

Simply install dialog, which is the package it is not finding.  This will no longer need the failback to readline.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded linux distribution.  I use it on some ALIX boards I have lying around, it is very stripped down, and as such there are a few annoyances which we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.<br />

perl: warning: Please check that your locale settings:<br />

    LANGUAGE = (unset),<br />

    LC_ALL = (unset),<br />

    LANG = &quot;en_US.utf8&quot;<br />

are supported and installed on your system.<br />

perl: warning: Falling back to the standard locale (&quot;C&quot;).

The Fix

We simply need to set the locales to use en_US.UTF-8 or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8<br />

# echo &quot;LANG=en_US.UTF-8&quot; &gt;&gt; /etc/default/locale<br />

# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh</p>

<p>Oracle SQL Developer<br />

Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.</p>

<p>&amp;nbsp;</p>

<p>LOAD TIME : 279#<br />

# A fatal error has been detected by the Java Runtime Environment:<br />

#<br />

# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192<br />

#<br />

# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)<br />

# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)<br />

# Problematic frame:<br />

# C 0x00000038a1e64910<br />

#<br />

# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try &quot;ulimit -c unlimited&quot; before starting Java again<br />

#<br />

# An error report file with more information is saved as:<br />

# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log<br />

[thread 140449881597696 also had an error]<br />

#<br />

# If you would like to submit a bug report, please visit:<br />

# http://bugreport.sun.com/bugreport/crash.jsp<br />

#<br />

/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} &quot;${APP_VM_OPTS[@]}&quot; ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} &quot;${APP_APP_OPTS[@]}&quot;

I also noticed, that while executing as root it worked.  However that clearly isn’t the “solution”

Fixing the Problem

Here we need to remove the GNOME_DESKTOP_SESSION_ID as part of the script.

$ cat /opt/sqldeveloper/sqldeveloper.sh<br />

#!/bin/bash<br />

unset -v GNOME_DESKTOP_SESSION_ID<br />

cd &quot;`dirname $0`&quot;/sqldeveloper/bin &amp;&amp; bash sqldeveloper $*

Once this was completed, SQL Developer launched clean for me.

 

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray, and take a  look at using CentOS as a hypervisor., and configuring very resilient network connections to support our guests.  Of course these instructions should be valid on Red Hat Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions…

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
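The two slave files above differ only in the DEVICE name (and the per-NIC HWADDR, which is hardware-specific), so on hosts with many interfaces they can be generated with a short loop.  This is just a sketch: it writes to a scratch directory by default, so point OUTDIR at /etc/sysconfig/network-scripts on a real host and add the HWADDR lines afterwards if you pin interfaces by MAC.

```shell
#!/bin/bash
# Generate ifcfg files for bond slaves. HWADDR lines are omitted here;
# add the per-NIC MAC afterwards if you pin interfaces by address.
OUTDIR=${OUTDIR:-/tmp/network-scripts}
mkdir -p "$OUTDIR"
for dev in eth0 eth1; do
  cat > "$OUTDIR/ifcfg-$dev" <<EOF
DEVICE=$dev
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
EOF
done
```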

Configure the Bonded Interface

Here we bond the two physical interfaces together in active-backup mode (mode=1), which increases resiliency: if the active link fails, traffic moves to the backup.  The miimon=100 option checks link state every 100 ms.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself; this is the device KVM attaches guest vNICs to, allowing guest network communication.  Note that the IP configuration lives on the bridge, not on the bond.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0
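Before restarting the network it is worth sanity-checking that the three layers actually reference each other: the slave’s MASTER must name the bond, and the bond’s BRIDGE must name a file with TYPE=Bridge.  A small sketch that reads the ifcfg files (the check_chain helper is my own, not part of the distribution):

```shell
#!/bin/bash
# Verify the eth -> bond -> bridge chain by reading the ifcfg files.
check_chain() {
  local dir=$1 bond bridge
  bond=$(sed -n 's/^MASTER=//p' "$dir/ifcfg-eth0")
  bridge=$(sed -n 's/^BRIDGE=//p' "$dir/ifcfg-$bond" 2>/dev/null)
  if grep -q '^TYPE=Bridge' "$dir/ifcfg-$bridge" 2>/dev/null; then
    echo "eth0 -> $bond -> $bridge ok"
  else
    echo "broken chain" >&2
    return 1
  fi
}

# On a real host:
#   check_chain /etc/sysconfig/network-scripts
```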

Service Restart

Finally, the easy part.  One snag I ran into: if you had previously configured IP addresses on bond0, you will have a tough time getting rid of them with a service restart alone.  I found it was easier to reboot the box itself.

# service network restart
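If a reboot is not convenient, flushing the stale addresses off bond0 before the restart may also clear that snag.  I did not validate this on CentOS 6.5, so treat it as a sketch; it is guarded so it only acts when run as root on a host that actually has bond0.

```shell
# Remove leftover IP addresses from bond0, then bounce networking.
# Run as root on the hypervisor; the guard makes it a no-op elsewhere.
if [ "$(id -u)" -eq 0 ] && ip link show bond0 >/dev/null 2>&1; then
  ip addr flush dev bond0
  service network restart
fi
```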

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with is integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out readily enough.  Calendars, however, turned out to be more of a challenge than I expected.

Here are the versions I validated these steps on.

  • BlackBerry Z10 with OS 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming that you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 lists all of the existing accounts (with mine obfuscated, of course).  We are going to add another one, so select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3, that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we select CalDAV as the type of account to use.  A little footnote: I was unable to get CardDAV working.  I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we populate all of the information needed to make a connection.  Keep in mind that the username must be in user@domain.tld form, and the Server Address should be in the following format: https://zimbra.domain.tld/dav/user@domain.tld/Calendar.  The important bits here are:

  • https – I suspect plain http works as well, but I did not validate it.
  • username – the username is a component of the URI, which makes this a little tough to implement for less sophisticated users.
  • Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether calendars with other names can be used, but this is the name needed for most situations.

Now set your password and sync interval and you should be ready to go.
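The server-address pattern is mechanical enough to compose in a one-liner, which is handy when rolling this out to several users.  The account and host below are placeholders, not real systems:

```shell
# Compose the Zimbra CalDAV URL from an account name and server host.
user="jdoe@example.com"
server="zimbra.example.com"
caldav_url="https://${server}/dav/${user}/Calendar"
echo "$caldav_url"
```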

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology- and healthcare-centric marketing firm, we at illumeture work with emerging companies to create more of the right conversations with the right people.  Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences.  Mark Johnson is Vice President with GuideIT, responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today.

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model, one in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for strategic or financial reasons.  The old sourcing model was often binary: you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option: the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end-to-end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions as well, but in this case I found it on Voyage Linux, a distribution built for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, the package debconf cannot find.  Once it is installed, debconf no longer needs to fall back to the Readline frontend.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around.  It is very stripped down, and as such there are a few annoyances we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to generate and set the locale to en_US.UTF-8, or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.
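A quick way to confirm the fix took, or to pre-check any other locale before pointing LANG at it (this little helper is my own sketch, not part of Voyage):

```shell
# Report whether a locale name is available on this system.
want="en_US.utf8"
if locale -a 2>/dev/null | grep -qix "$want"; then
  echo "$want is available"
else
  echo "$want is missing; generate it with: locale-gen --purge en_US.UTF-8"
fi
```

The output depends on what is installed on your box, which is exactly the point of the check.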

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh</p>

<p>Oracle SQL Developer<br />

Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.</p>

<p>&amp;nbsp;</p>

<p>LOAD TIME : 279#<br />

# A fatal error has been detected by the Java Runtime Environment:<br />

#<br />

# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192<br />

#<br />

# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)<br />

# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)<br />

# Problematic frame:<br />

# C 0x00000038a1e64910<br />

#<br />

# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try &quot;ulimit -c unlimited&quot; before starting Java again<br />

#<br />

# An error report file with more information is saved as:<br />

# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log<br />

[thread 140449881597696 also had an error]<br />

#<br />

# If you would like to submit a bug report, please visit:<br />

# http://bugreport.sun.com/bugreport/crash.jsp<br />

#<br />

/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} &quot;${APP_VM_OPTS[@]}&quot; ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} &quot;${APP_APP_OPTS[@]}&quot;

I also noticed, that while executing as root it worked.  However that clearly isn’t the “solution”

Fixing the Problem

Here we need to remove the GNOME_DESKTOP_SESSION_ID as part of the script.

$ cat /opt/sqldeveloper/sqldeveloper.sh<br />

#!/bin/bash<br />

unset -v GNOME_DESKTOP_SESSION_ID<br />

cd &quot;`dirname $0`&quot;/sqldeveloper/bin &amp;&amp; bash sqldeveloper $*

Once this was completed, SQL Developer launched clean for me.

 

Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray, and take a  look at using CentOS as a hypervisor., and configuring very resilient network connections to support our guests.  Of course these instructions should be valid on Red Hat Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions…

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0<br />

DEVICE=eth0<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1<br />

DEVICE=eth1<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which will increase the resiliency of the interface.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0<br />

DEVICE=bond0<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

USERCTL=no<br />

BONDING_OPTS=&quot;mode=1 miimon=100&quot;<br />

BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is what KVM creates the vNIC on to allow for guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0<br />

DEVICE=br0<br />

TYPE=Bridge<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

USERCTL=no<br />

IPADDR=192.168.1.10<br />

NETMASK=255.255.255.0<br />

GATEWAY=192.168.1.1<br />

DELAY=0

Service Restart

Finally the easy part.  Now one snag I ran into.  If you created IP addresses on bond0, then you will have a tough time getting rid of that with a service restart alone.  I found it was easier to reboot the box itself.

# service network restart

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my Blackberry Z10, going on a year now, and I have absolutely loved it.  However, one of the issues that I have fought was in integrating it with my Zimbra Installation.  Email was easy, the IMAP protocol sorted that out easily enough… However, calendars turned out to be more of a challenge than I expected.

Here is the versions that I validated these steps on.

  • Blackberry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started, I am assuming that you know how to find the settings on BB10, but once there go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 is a listing of all of the existing accounts, with mine obfuscated, of course, however, we are going to be adding another one, so we select Add Account.

Figure 1-3 – Add Accounts

You can see above in Figure 1-3, that we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we are selecting CalDAV as the type of Account to use.  Also a little footnote, I was unable to get CardDAV working. I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we are populating all of the information needed to make a connection.  Please keep in mind, that we need to use user@domain.tld for the username, and the Server Address should be in the following format:  https://zimbra.domain.tld/dav/user@domain.tld/Calendar. The important bits here are (1) https – I suspect http works as well, but I did not validate (2) username – the username is a component of the URI, this makes it a little tough to implement for less sophisticated users (3) Calendar – the default calendar for all Zimbra users is named “Calendar” – with a capital “C” not sure if you can have calendars with other names, but this is the name needed for most situations.

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology and healthcare centric marketing firm, we at illumeture work with emerging companies in achieving more right conversations with right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last 6 years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today. 

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary, you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end to end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year road-map.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second, is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or if you do ensure you retain contractual flexibility that goes well beyond contract bench-marking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At Guide IT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions, however, in this case, I found it on Voyage Linux, which is a Linux distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog<br />

debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, &lt;&gt; line 1.)<br />

debconf: falling back to frontend: Readline

The Fix

Simply install dialog, which is the package it is not finding.  This will no longer need the failback to readline.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded linux distribution.  I use it on some ALIX boards I have lying around, it is very stripped down, and as such there are a few annoyances which we have to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.<br />

perl: warning: Please check that your locale settings:<br />

    LANGUAGE = (unset),<br />

    LC_ALL = (unset),<br />

    LANG = &quot;en_US.utf8&quot;<br />

are supported and installed on your system.<br />

perl: warning: Falling back to the standard locale (&quot;C&quot;).

The Fix

We simply need to set the locales to use en_US.UTF-8 or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8<br />

# echo &quot;LANG=en_US.UTF-8&quot; &gt;&gt; /etc/default/locale<br />

# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

When launching it via the script itself it gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh</p>

<p>Oracle SQL Developer<br />

Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.</p>

<p>&amp;nbsp;</p>

<p>LOAD TIME : 279#<br />

# A fatal error has been detected by the Java Runtime Environment:<br />

#<br />

# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192<br />

#<br />

# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)<br />

# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)<br />

# Problematic frame:<br />

# C 0x00000038a1e64910<br />

#<br />

# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try &quot;ulimit -c unlimited&quot; before starting Java again<br />

#<br />

# An error report file with more information is saved as:<br />

# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log<br />

[thread 140449881597696 also had an error]<br />

#<br />

# If you would like to submit a bug report, please visit:<br />

# http://bugreport.sun.com/bugreport/crash.jsp<br />

#<br />

/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} &quot;${APP_VM_OPTS[@]}&quot; ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} &quot;${APP_APP_OPTS[@]}&quot;

I also noticed, that while executing as root it worked.  However that clearly isn’t the “solution”

Fixing the Problem

Here we need to remove the GNOME_DESKTOP_SESSION_ID as part of the script.

$ cat /opt/sqldeveloper/sqldeveloper.sh<br />

#!/bin/bash<br />

unset -v GNOME_DESKTOP_SESSION_ID<br />

cd &quot;`dirname $0`&quot;/sqldeveloper/bin &amp;&amp; bash sqldeveloper $*

Once this was completed, SQL Developer launched clean for me.

 

No post found
Linux KVM: Bridging a Bond on CentOS 6.5

Today we are going to hop back into the KVM fray, and take a  look at using CentOS as a hypervisor., and configuring very resilient network connections to support our guests.  Of course these instructions should be valid on Red Hat Linux and Oracle Linux as well, though there is a little more to be done around getting access to the repos on those distributions…

Enable Bonding

I am assuming this is a first build for you, so this step might not be applicable, but it won’t hurt anything.

# modprobe --first-time bonding

Configure the Physical Interfaces

In our example we will be using two physical interfaces, eth0 and eth1.  Here are the interface configuration files.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0<br />

DEVICE=eth0<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth1<br />

DEVICE=eth1<br />

HWADDR=XX:XX:XX:XX:XX:XX<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

MASTER=bond0<br />

SLAVE=yes<br />

USERCTL=no

Configure the Bonded Interface

Here we are going to bond the interfaces together, which will increase the resiliency of the interface.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0<br />

DEVICE=bond0<br />

ONBOOT=yes<br />

NM_CONTROLLED=no<br />

BOOTPROTO=none<br />

USERCTL=no<br />

BONDING_OPTS=&quot;mode=1 miimon=100&quot;<br />

BRIDGE=br0

Configure the Bridge

The final step is to configure the bridge itself, which is what KVM creates the vNIC on to allow for guest network communication.

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DELAY=0
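
With the bridge defined, membership can be verified with brctl from the bridge-utils package. Roughly, you should see bond0 listed under br0 (the bridge id below is a placeholder):

```shell
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.xxxxxxxxxxxx       no              bond0
```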

Service Restart

Finally, the easy part.  One snag I ran into: if you had previously assigned IP addresses to bond0, you will have a tough time getting rid of them with a service restart alone.  I found it was easier to reboot the box itself.

# service network restart
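
With br0 up, guests can attach to the bridge at install time. A sketch using virt-install; the guest name, disk path, sizes, and ISO location are all illustrative values:

```shell
# virt-install --name guest1 --ram 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/guest1.img,size=20 \
    --network bridge=br0 \
    --cdrom /root/CentOS-6.5-x86_64-minimal.iso
```

The --network bridge=br0 flag is the important bit; it places the guest’s vNIC directly on the bridge we just built.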

BlackBerry OS 10: Caldav Setup with Zimbra

I have owned my BlackBerry Z10 for going on a year now, and I have absolutely loved it.  However, one of the issues I have fought with was integrating it with my Zimbra installation.  Email was easy; the IMAP protocol sorted that out.  Calendars, however, turned out to be more of a challenge than I expected.

Here are the versions that I validated these steps on.

  • Blackberry Z10 with 10.2.1.2977
  • Zimbra Collaboration Server 8.5.0

Here is how to get it done.

Figure 1-1 – System Settings

Figure 1-1 gets us started.  I am assuming you know how to find the settings on BB10; once there, go into the Accounts section.

Figure 1-2 – Accounts

Figure 1-2 lists all of the existing accounts, with mine obfuscated, of course.  We are going to add another one, so select Add Account.

Figure 1-3 – Add Accounts

As you can see above in Figure 1-3, we don’t use the “Subscribed Calendar” selection, but instead go to Advanced.  When I used Subscribed Calendar, it was never able to successfully perform a synchronization.

Figure 1-4 – Advanced Setup

In Figure 1-4 we select CalDAV as the account type.  A little footnote: I was unable to get CardDAV working.  I will provide an update or another article if I find a way around this.

Figure 1-5 – CalDAV Settings

In Figure 1-5 we populate all of the information needed to make a connection.  Keep in mind that the username must be in the form user@domain.tld, and the Server Address should be in the following format: https://zimbra.domain.tld/dav/user@domain.tld/Calendar. The important bits here are:

  • https – I suspect http works as well, but I did not validate it.
  • username – the username is a component of the URI, which makes this a little tough to implement for less sophisticated users.
  • Calendar – the default calendar for all Zimbra users is named “Calendar”, with a capital “C”.  I am not sure whether calendars with other names work, but this is the name needed for most situations.
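
Before fighting with the handset, it can be worth confirming the URL from a desktop. A sketch using curl; the user and domain are placeholders, and PROPFIND is the standard WebDAV/CalDAV discovery method (a 207 Multi-Status response suggests the path is valid):

```shell
$ curl -u user@domain.tld -X PROPFIND \
    "https://zimbra.domain.tld/dav/user@domain.tld/Calendar"
```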

Now set your password and sync interval and you should be ready to go.

IT Trends, Change and The Future…A Conversation With an Industry Veteran

As a technology- and healthcare-centric marketing firm, we at illumeture work with emerging companies to achieve more right conversations with right people. Part of that work comes in learning and sharing the thought leadership and subject matter expertise of our clients with the right audiences. Mark Johnson is Vice President with GuideIT, responsible for Account Operations and Delivery.  Prior to joining GuideIT, Mark spent 23 years with Perot Systems and Dell, the last six years leading business development teams tasked with solutioning, negotiating and closing large healthcare IT services contracts.  We sat down with Mark for his perspective on what CIOs should be thinking about today.

Q:  You believe that a number of fundamental changes are affecting how CIOs should be thinking about both how they consume and deliver IT services – can you explain?

A:  Sure.  At a high level, start with the growing shift from sole-source IT services providers to more of a multi-sourcing model.  A model in which CIOs ensure they have the flexibility to choose among a variety of application and services providers, while maintaining the ability to retain those functions that make sense for a strategic or financial reason.  The old sourcing model was often binary, you either retained the service or gave it to your IT outsourcing vendor.  Today’s environment demands a third option:  the multi-source approach, or what we at GuideIT call “Flex-Sourcing”.

Q:  What’s driving that demand?

A:  A number of trends, some of which are industry specific.  But two that cross all industries are the proliferation of Software as a Service in the market, and cloud computing moving from infancy to adolescence.

Q:  Software as a Service isn’t new.

A:  No it isn’t.  But we’re moving from early adopters like salesforce.com to an environment where new application providers are developing exclusively for the cloud, and existing providers are executing to a roadmap to get there.  And not just business applications; hosted PBX is a great example of what used to be local infrastructure moving to a SaaS model in the cloud.  Our service desk telephony is hosted by one of our partners – OneSource, and we’re working closely with them to bring hosted PBX to our customers.  E-mail is another great example.  In the past I’d tee up email as a service to customers, usually either Gmail or Office365, but rarely got traction.  Now you see organizations looking hard at either a 100% SaaS approach for email, or in the case of Exchange, a hybrid model where organizations classify their users, with less frequent users in the cloud, and super-users hosted locally.  GuideIT uses Office365 exclusively, yet I still have thick-client Outlook on my PC and the OWA application on both my iPhone and Windows tablet.  That wasn’t the case not all that long ago and I think we take that for granted.

Q:  And you think cloud computing is growing up?

A:  Well it’s still in grade school, but yes, absolutely.  Let’s look at what’s happened in just a few short years, specifically with market leaders such as Amazon, Microsoft and Google.  We’ve gone from an environment of apprehension, with organizations often limiting use of these services for development and test environments, to leading application vendors running mission critical applications in the cloud, and being comfortable with both the performance/availability and the security of those environments.  On top of that, these industry leaders are, if you’ll excuse the comparison, literally at war with each other to drive down cost, directly benefiting their customers.  We’re a good ways away from a large organization being able to run 100% in the cloud, but the shift is on.  CIOs have to ensure they are challenging the legacy model and positioning their organizations to benefit from both the performance and flexibility of these environments, but just as importantly the cost. 

Q:  How do they do that?

A:  A good place to start is an end-to-end review of their infrastructure and application strategy to produce a roadmap that positions their organization to ride this wave, not be left behind carrying the burden of legacy investments.  Timing is critical; the pace of change in IT today is far more rapid than the old mainframe or client-server days and this process takes planning.  That said, this analysis should not be just about a multi-year roadmap.  The right partner should be able to make recommendations around tactical initiatives, the so-called “low-hanging fruit” that will generate immediate cost savings, and help fund your future initiatives.  Second is to be darn sure you don’t lock yourself into long-term contracts with hosting providers, or, if you do, ensure you retain contractual flexibility that goes well beyond contract benchmarking.  You have to protect yourself from the contracting model where vendors present your pricing in an “as a service” model, but are really just depreciating capital purchased on your behalf in the background.  You might meet your short-term financial objectives, but I promise in short order you’ll realize you left money on the table.  At GuideIT we’re so confident in what we can deliver that if a CIO engages GuideIT for an enterprise assessment, and isn’t happy with the results, they don’t pay.

Q:  You’ve spent half your career in healthcare – how do you see these trends you’ve discussed affecting the continuity of care model?

A:  Well we could chat about just that topic for quite some time.  My “ah-ha moments” tend to come from personal experience.  I’ll give you two examples.  Recently I started wearing a FitBit that syncs with my iPhone.  On a good day, the device validates my daily physical activity; but to be honest, too often reminds me that I need to do a better job of making exercise a mandatory part of my day.  Today that data is only on my smartphone – tomorrow it could be with my family physician, in my PHR, or even with my insurer to validate wellness premium discounts.  The “internet of things” is here and you just know these activity devices are the tip of the iceberg.  Your infrastructure and strategy roadmap have to be flexible enough to meet today’s requirements, but also support what we all know is coming, and in many cases what we don’t know is coming.  Today’s environment reminds me of the early thin client days that placed a premium on adopting a services-oriented architecture.

Second is my experience with the DNA sequencing service 23andme.com.  I found my health and ancestry data fascinating, and though the FDA has temporarily shut down the health data portion of the service, there will come a day very soon that we’ll view the practice of medicine without genome data as akin to the days without antibiotics and MRIs.  Just as they are doing with the EMR Adoption Model, CIOs should ask themselves where they’re at on the Healthcare Analytics Adoption Model and what their plan is to move to the advanced stages - the ones beyond reimbursement.  A customer of mine remarked the other day that what’s critical about the approach to analytics is not “what is the answer?” but rather “what is the question?”  And he’s right.

Voyage Linux: Dialog Error with Apt

This can happen on other Linux distributions as well; in this case, I found it on Voyage Linux, a distribution for embedded hardware.

The Error

Here we are dealing with an annoyance whenever you use apt-get or aptitude.

debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog-based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline

The Fix

Simply install dialog, the package debconf cannot find.  Once it is present, the fallback to readline is no longer needed.

# apt-get install dialog

Once the dialog package has been installed the issue will no longer occur on subsequent runs of apt-get or aptitude.

Voyage Linux: Locale Error with Apt

Voyage Linux is an embedded Linux distribution.  I use it on some ALIX boards I have lying around; it is very stripped down, and as such there are a few annoyances to fix.

The Error

This issue happens when attempting to install/upgrade packages using apt-get or aptitude.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.utf8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The Fix

We simply need to set the locales to use en_US.UTF-8 or whichever locale is correct for your situation.

# locale-gen --purge en_US.UTF-8
# echo "LANG=en_US.UTF-8" >> /etc/default/locale
# update-locale

Now subsequent runs of apt-get or aptitude will no longer generate the error.
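
The result can be double-checked by printing the locale now in effect; after the fix, LANG should report en_US.UTF-8 (or whichever locale you generated):

```shell
# Print the LANG setting currently in effect.
locale | grep '^LANG='
```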

Adventures in ZFS: Splitting a Zpool
SQL Developer Crash on Fedora 20

I ran into a painful issue on Fedora 20 with SQL Developer.  Basically every time it was launched via the shortcut it would go through loading, and then disappear.

Manual Invocation of SQL Developer

Launching it via the script itself gives us a little more information.

$ /opt/sqldeveloper/sqldeveloper.sh

Oracle SQL Developer
Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.

LOAD TIME : 279
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000038a1e64910, pid=12726, tid=140449865832192
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000038a1e64910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sqldeveloper/sqldeveloper/bin/hs_err_pid12726.log
[thread 140449881597696 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 611: 12726 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

I also noticed that it worked when executed as root, though that clearly isn’t the “solution”.

Fixing the Problem

The fix is to unset GNOME_DESKTOP_SESSION_ID as part of the launch script.

$ cat /opt/sqldeveloper/sqldeveloper.sh
#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Once this was completed, SQL Developer launched cleanly for me.
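
If you would rather not edit the vendor script, the same workaround can be applied for a single launch; env -u strips one variable from the child process’s environment:

```shell
$ env -u GNOME_DESKTOP_SESSION_ID /opt/sqldeveloper/sqldeveloper.sh
```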

 
