Adv DB: Web DBMS Tools

Developers need tools to design web-DBMS interfaces for dynamic use of their sites, whether for e-commerce (the Amazon storefront), decision making (National Oceanic and Atmospheric Administration weather forecast products), or gathering information (Survey Monkey).  ADO.NET and Fusion Middleware are two of many tools and middleware that can be used to develop web-to-database interaction (MUSE, 2015).

ADO.NET (Connolly & Begg, 2014)

ADO.NET is Microsoft’s approach to web-centric middleware for the web-database interface. It provides compatibility with the .NET class library, support for XML (used extensively as an industry standard), and both connected and disconnected data access.  It has two tiers: the dataset (data table collection, XML) and the .NET Framework Data Provider (connection, command, data reader, and data adapter for the database).

Pros: Built on standards, which allows non-Microsoft products to use it.  It automatically creates XML interfaces so the application can be turned into a Web Service.  Even the .NET classes conform to XML and other standards.  Other development tools can be added and bound to the Web Service to further expand the GUI set.

Cons: According to the Data Developer Center website (2010), with connected data access you must explicitly manage all database resources, and failing to do so can cause resource mismanagement (connections that are never freed).  Certain classes are also missing functionality, such as mapping to table-valued functions in the Entity Framework.
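The connected-access pitfall above is general: any code that opens connections must release them deterministically. A minimal sketch of the pattern, using Python's sqlite3 as a stand-in for a .NET data provider (the table and query are made up for illustration):

```python
import sqlite3
from contextlib import closing

# Connected data access: the connection is a scarce resource that must be
# released explicitly. Wrapping it in closing() guarantees cleanup even if a
# query raises, much like a C# "using" block around a connection object.
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1), (2)")
    total = conn.execute("SELECT SUM(x) FROM t").fetchone()[0]

# By this point the connection has been closed, whether or not an error occurred.
print(total)  # → 3
```

Forgetting the equivalent of this cleanup step is exactly how connections end up "never freed" in connected-access code.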

Fusion Middleware (Connolly & Begg, 2014):

Fusion Middleware is Oracle’s approach to web-centric middleware for the web-database interface; it provides development tools, business intelligence, content management, and more.  It has three tiers: Web (Oracle Web Cache and HTTP Server), Middle (applications, security services, WebLogic Servers, other remote servers, etc.), and Data (the database).

Pros: Scalable. It is based on the Java platform (a full Java EE 6 implementation).  It supports Apache modules such as those that route HTTP requests, call stored procedures on a database server, provide transparent single sign-on, support S-HTTP, etc. Its Business Intelligence function allows you to extract and analyze data to create reports and charts (statically or dynamically) for decision analysis.

Cons: The complexity of the system, along with its new approach, creates a steep learning curve and requires skilled developers.

The best approach for me was Microsoft’s: if you want to connect to many other Microsoft applications, this is one route to consider.  It has a gentle learning curve (from personal experience).  Also, when I was building apps for the library at the University of Oklahoma, the DBAs and I didn’t really like the grid view’s basic functionality, so we exploited the aforementioned pro of interfacing with third-party code to create a more interactive table view of our data.  What is also nice is that our data was in an Oracle database, and all we had to do was switch the pointer from SQL Server to Oracle, without needing to change the GUI code.


Plagiarism: A word

The following article talks about abuses that can lead to various forms of plagiarism.  eContent Pro (2019) is a really great article showcasing that there is more than one way to plagiarize.  However, it did not provide examples to showcase each case, nor did it explain the nuance of case 2 all that well (eContent Pro, 2019):

  1. Self-plagiarism
  2. Overreliance on Multiple Sources
  3. Patchwriting
  4. Overusing the Same Source

The following is my attempt to do just that.

Example of Self-plagiarism

If I were to use the following two paragraphs verbatim in a new paper or as a book chapter, it would be considered self-plagiarism, even though these are my own words from Hernandez (2017a).  It is good to recycle your works cited page; it is not good to recycle your words the way you would recycle plastic bottles.

Chapter 1: An Introduction to Data Analytics

Data analytics existed before 1854. Snow (1854) had a theory of how cholera outbreaks occur, and he used that theory to have the handle removed from a water pump that had been contaminated in the summer of 1854. He set out to prove that his hypothesis about how cholera epidemics originate was correct, so he drew his famous spot maps for the Board of Guardians of St. James’s parish in December 1854. These maps were shown in the eventual 2nd edition of his book “On the Mode of Communication of Cholera” (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000; Snow, 1855). As Brody et al. (2000) stated, this case was one of the first famous examples of a theory being proven by data, though spot maps had been used earlier.

However, the use of geospatial data analytics alone can be quite limiting in reaching a conclusive result if there is no underlying theory as to why the data is being recorded (Brody et al., 2000). By adding subject matter knowledge and subject matter relationships before data analytics, context can be added to the data, which can help yield better results (Garcia, Ferraz, & Vivacqua, 2009). In the case of Snow’s analysis, anyone could have argued that the atmosphere in that region of London was causing the outbreak. However, Snow’s original hypothesis was about the transmission of cholera through water distribution systems, and the data then helped support his hypothesis (Brody et al., 2000; Snow, 1854). Thus, the suboptimal results generated from the outdated Edisonian-esque approach, a test-and-fail methodology, can prove very costly in research and development compared to the results and insights gained from text mining and manipulation techniques (Chonde & Kumara, 2014).

Example of Overreliance on Multiple Sources

The following was taken from my dissertation (Hernandez, 2017b).  There is definitely an overreliance on sources here, as with any dissertation, master’s thesis, or other interdisciplinary work. However, my voice still shines through. That is where the line is drawn by eContent Pro (2019): Is the author’s voice still present?

This excerpt shows how I gathered multiple methodologies from multiple sources and combined them to form a best practice for data preprocessing. This process is called synthesizing. No single source had all the components, and listing which source contained which parts of the best-practice methodology was the purpose of these three paragraphs.  If my voice weren’t present in these paragraphs, then it would be considered plagiarism.

Collecting the raw and unaltered real-world data is the first step of any data or text mining research study (Corrales et al., 2015; Gera & Goel, 2015; He et al., 2013; Hoonlor, 2011; Nassirtoussi et al., 2014). Next, preprocessing raw text data is needed, because raw text data files are unsuitable for predictive data analytics software tools like WEKA (Hoonlor, 2011; Miranda, n.d.). Barak and Modarres (2015), Miranda (n.d.), and Nassirtoussi et al. (2014) concluded that in both data and text mining, data preprocessing has the most significant impact on the research results.

Raw data can have formats that change across time, therefore converting the data into one common format for analysis is necessary for data analytics (Mandrai & Barskar, 2014). Also, the removal of HTML tags from web-based data sources allows for the removal of extraneous data points that can produce unpredictable results (Netzer et al., 2012). Finally, deciding on a strategy for how to deal with missing or defective data fields can aid in mitigating noise in the results (Barak & Modarres, 2015; Fayyad et al., 1996; Mandrai & Barskar, 2014; Netzer et al., 2012). Furthermore, to gain the most insights surrounding a research problem, data from multiple data sources should be collected and integrated (Corrales et al., 2015).

Predictive data analytics tools can analyze unstructured text data after the preprocessing step. Preprocessing involves tokenization, stop-word removal, and word-normalization (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). Tokenization is when a body of text is reduced to a set of units, phrases, or groups of keywords for analysis (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). For example, the term eyewall replacement would be considered one token, rather than two words or two different tokens. Stop-word removal is the removal of words that add no value to the predictive analytics algorithm from the body of text; these words include prepositions, articles, and conjunctions (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). Miranda (n.d.) stated that stop-word removal can also be context-dependent, because some contextual words can yield little to no value in the analysis. For instance, meteorological forecast models in this study are considered context-dependent stopwords. Lastly, word-normalization transforms the letters in a body of text to a single case and removes the conjugations of words (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). For example, stemming the words cooler and coolest yields cool-, which heightens the fidelity of the results by reducing dimensionality.
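The three preprocessing steps described above (tokenization, stop-word removal, and word-normalization) can be sketched in a few lines. This is a minimal illustration, not the cited tools; the stop-word list and the crude suffix-stripping stemmer are assumptions for demonstration only:

```python
import re

# Illustrative stop-word list: prepositions, articles, and conjunctions.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "on", "for"}

def tokenize(text):
    """Lower-case the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def remove_stopwords(tokens):
    """Drop tokens that add no value to the analysis."""
    return [t for t in tokens if t not in STOPWORDS]

def stem(token):
    """Very crude suffix stripping to normalize conjugations (a stand-in
    for a real stemmer such as Porter's)."""
    for suffix in ("est", "er", "ing", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

raw = "The coolest of the cool breezes cooling the coast"
tokens = [stem(t) for t in remove_stopwords(tokenize(raw))]
print(tokens)  # → ['cool', 'cool', 'breeze', 'cool', 'coast']
```

Note how cooler-style variants collapse to a single stem, which is exactly the dimensionality reduction the paragraph describes.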

Example of Patchwriting and Overusing the Same Source

The following is a self-created meta-post, which happens to be a curation post for Service Operations KPIs and CSFs. The words below have been lifted from various sections of:

Each sample Critical Success Factors (CSFs) is followed by a small number of typical Key Performance Indicators (KPIs) that support the CSF. These KPIs should not be adopted without careful consideration. Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the CSI register for evaluation and possible implementation.

Service Operations: Ensures that services operate within agreed parameters and, when they are interrupted, restores them as quickly as possible

Request Fulfillment Management: Request Fulfillment is responsible for

  • Managing the initial contact between users and the Service Desk.
  • Managing the lifecycle of service requests from initial request through delivery of the expected results.
  • Managing the channels by which users can request and receive services via service requests.
  • Managing the process by which approvals and entitlements are defined and managed for identified service requests (future).
  • Managing the supply chain for service requests and assisting service providers in ensuring that the end-to-end delivery is managed according to plan.
  • Working with the Service Catalog and Service Portfolio managers to ensure that all standard service requests are appropriately defined and managed in the service catalog (future).


  • CSF Requests must be fulfilled in an efficient and timely manner that is aligned to agreed service level targets for each type of request

o    KPI The mean elapsed time for handling each type of service request

o    KPI The number and percentage of service requests completed within agreed target times

o    KPI Breakdown of service requests at each stage (e.g. logged, work in progress, closed etc.)

o    KPI Percentage of service requests closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Number and percentage of service requests resolved remotely or through automation, without the need for a visit

o    KPI Total numbers of requests (as a control measure)

o    KPI The average cost per type of service request

  • CSF Only authorized requests should be fulfilled

o    KPI Percentage of service requests fulfilled that were appropriately authorized

o    KPI Number of incidents related to security threats from request fulfilment activities

  • CSF User satisfaction must be maintained

o    KPI Level of user satisfaction with the handling of service requests (as measured in some form of satisfaction survey)

o    KPI Total number of incidents related to request fulfilment activities

o    KPI The size of current backlog of outstanding service requests.

Incident Management: Incident Management is responsible for the resolution of any incident, reported by a tool or user, which is not part of normal operations and causes or may cause a disruption to or decrease in the quality of a service.

  • CSF Resolve incidents as quickly as possible minimizing impacts to the business

o    KPI Mean elapsed time to achieve incident resolution or circumvention, broken down by impact code

o    KPI Breakdown of incidents at each stage (e.g. logged, work in progress, closed etc.)

o    KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Number and percentage of incidents resolved remotely, without the need for a visit

o    KPI Number of incidents resolved without impact to the business (e.g. incident was raised by event management and resolved before it could impact the business)

  • CSF Maintain quality of IT services

o    KPI Total numbers of incidents (as a control measure)

o    KPI Size of current incident backlog for each IT service

o    KPI Number and percentage of major incidents for each IT service

  • CSF Maintain user satisfaction with IT services

o    KPI Average user/customer survey score (total and by question category)

o    KPI Percentage of satisfaction surveys answered versus total number of satisfaction surveys sent

  • CSF Increase visibility and communication of incidents to business and IT support staff

o    KPI Average number of service desk calls or other contacts from business users for incidents already reported

o    KPI Number of business user complaints or issues about the content and quality of incident communications

  • CSF Align incident management activities and priorities with those of the business

o    KPI Percentage of incidents handled within agreed response time (incident response-time targets may be specified in SLAs, for example, by impact and urgency codes)

o    KPI Average cost per incident

  • CSF Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management and reporting of incidents to maintain business confidence in IT capabilities

o    KPI Number and percentage of incidents incorrectly assigned

o    KPI Number and percentage of incidents incorrectly categorized

o    KPI Number and percentage of incidents processed per service desk agent

o    KPI Number and percentage of incidents related to changes and releases.

Problem Management: Problem Management is responsible for the activities required to

  • Diagnose the root cause of incidents.
  • Determine the resolution to related problems.
  • Perform trend analysis to identify and resolve problems before they impact the live environment.
  • Ensure that resolutions are implemented through the appropriate control procedures, especially change management and release management.

Problem Management maintains information about problems and appropriate workarounds and resolutions to help the organization reduce the number and impact of incidents over time. To do this, Problem Management has a strong interface with Knowledge Management and uses tools such as the Known Error Database.

  • CSF Minimize the impact to the business of incidents that cannot be prevented

o    KPI The number of known errors added to the KEDB

o    KPI The percentage accuracy of the KEDB (from audits of the database)

o    KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Average incident resolution time for those incidents linked to problem records

  • CSF Maintain quality of IT services through elimination of recurring incidents

o    KPI Total numbers of problems (as a control measure)

o    KPI Size of current problem backlog for each IT service

o    KPI Number of repeat incidents for each IT service

  • CSF Provide overall quality and professionalism of problem handling activities to maintain business confidence in IT capabilities

o    KPI The number of major problems (opened and closed and backlog)

o    KPI The percentage of major problem reviews successfully performed

o    KPI The percentage of major problem reviews completed successfully and on time

o    KPI Number and percentage of problems incorrectly assigned

o    KPI Number and percentage of problems incorrectly categorized

o    KPI The backlog of outstanding problems and the trend (static, reducing or increasing?)

o    KPI Number and percentage of problems that exceeded their target resolution times

o    KPI Percentage of problems resolved within SLA targets (and the percentage that are not!)

o    KPI Average cost per problem.

Event Management: These processes have planning, design, and operations activity. Event Management is responsible for any aspect of Service Management that needs to be monitored or controlled and where the monitoring and controls can be automated. This includes:

  • Configuration items.
  • Environmental controls.
  • Software licensing.
  • Security.
  • Normal operational activities.

Event Management includes defining and maintaining Event Management solutions and managing events.

  • CSF Detecting all changes of state that have significance for the management of CIs and IT services

o    KPI Number and ratio of events compared with the number of incidents

o    KPI Number and percentage of each type of event per platform or application versus total number of platforms and applications underpinning live IT services (looking to identify IT services that may be at risk for lack of capability to detect their events)

  • CSF Ensuring all events are communicated to the appropriate functions that need to be informed or take further control actions

o    KPI Number and percentage of events that required human intervention and whether this was performed

o    KPI Number of incidents that occurred and percentage of these that were triggered without a corresponding event

  • CSF Providing the trigger, or entry point, for the execution of many service operation processes and operations management activities

o    KPI Number and percentage of events that required human intervention and whether this was performed

  • CSF Provide the means to compare actual operating performance and behaviour against design standards and SLAs

o    KPI Number and percentage of incidents that were resolved without impact to the business (indicates the overall effectiveness of the event management process and underpinning solutions)

o    KPI Number and percentage of events that resulted in incidents or changes

o    KPI Number and percentage of events caused by existing problems or known errors (this may result in a change to the priority of work on that problem or known error)

o    KPI Number and percentage of events indicating performance issues (for example, growth in the number of times an application exceeded its transaction thresholds over the past six months)

o    KPI Number and percentage of events indicating potential availability issues (e.g. failovers to alternative devices, or excessive workload swapping)

  • CSF Providing a basis for service assurance, reporting and service improvement

o    KPI Number and percentage of repeated or duplicated events (this will help in the tuning of the correlation engine to eliminate unnecessary event generation and can also be used to assist in the design of better event generation functionality in new services)

o    KPI Number of events/alerts generated without actual degradation of service/functionality (false positives – indication of the accuracy of the instrumentation parameters, important for CSI).

Access Management: Access Management aims to grant authorized users the right to use a service, while preventing access by non-authorized users. The Access Management processes essentially execute policies defined in Information Security Management. Access Management is sometimes also referred to as “Rights Management” or “Identity Management”.

  • CSF Ensuring that the confidentiality, integrity and availability of services are protected in accordance with the information security policy

o    KPI Percentage of incidents that involved inappropriate security access or attempts at access to services

o    KPI Number of audit findings that discovered incorrect access settings for users that have changed roles or left the company

o    KPI Number of incidents requiring a reset of access rights

o    KPI Number of incidents caused by incorrect access settings

  • CSF Provide appropriate access to services on a timely basis that meets business needs

o    KPI Percentage of requests for access (service request, RFC etc.) that were provided within established SLAs and OLAs

  • CSF Provide timely communications about improper access or abuse of services on a timely basis

o    KPI Average duration of access-related incidents (from time of discovery to escalation).


Unconventional Hurricane Prep

We are at the height of hurricane season again in the United States. Although we have hurricane preparation lists from multiple websites, I would like to share some unconventional items:

To do:

  1. As bad as a Hurricane is, it is a fantastic way to really meet your neighbors.
  2. Fill up your bathtubs with water. If there is no running water afterward, you can use a bucket of tub water to refill the tank at the back of your toilet so you can flush.
  3. Fill up a cup of water and put it in the freezer to freeze. Then put a coin on the top of the frozen water. If you come back from evacuating and don’t know if you lost power for a while check the cup. If the coin is frozen to the bottom of the cup you know the food defrosted and refroze when the power came back on. Throw out the food. If the coin is still on top your food is fine.
  4. Also, fill your Tupperware with water and freeze it. It can be used after the storm in coolers to keep food and drinks cold. Ice will be precious!
  5. Fill up coolers with water for drinking after the storm, because tap water after a storm may be contaminated and have to be boiled before use.
  6. Make sure nothing is left outside that can hit the house. Pull all outside furniture, bird feeders, etc. into the garage; most become flying objects if left outside. If you can’t bring trash cans or recycling bins in, empty them before the storm hits and let them fill with water to weigh them down.
  7. Don’t buy hurricane snacks too early because you will eat them before it hits.
  8. Fill up all cars with gasoline or diesel prior to evacuating or staying put.  You never know when the next shipment of gas will come in nor how much more expensive it will be, because of low supply and high demand. Save your gas after the storm. If you must sight-see, use a bicycle.
  9. Close all doors in the home. Especially if you evacuate. If you lose a window, that should confine water damage.
  10. Metal garage doors, especially the 9-to-10-foot variety, can’t handle strong winds; they will collapse inward. Either park in front of them or back a car up against them from the inside. Don’t forget the heavy blanket between the door and the car.
  11. Always assume a downed power line may be present in standing water.
  12. If you lose power and you have any solar-powered lights, bring some inside to light the house at night and put them back in the sun during the day. It saves batteries!
  13. If you’re evacuating, don’t forget to unplug any electrical items that you can, eg TV, router, desktop pc, etc. If your area loses power, there could be a surge when your power company is trying to bring folks back online. Most surge protectors will help, but it’s usually better to be safe than sorry. Also, when cutting areas back online there may be power “blips” (on then off really quick). It’s best to wait until the power stabilizes to plug stuff back in.
  14. Charge all your electronics and turn them off before the storm.  This includes computers, cellphones, etc.
  15. Make sure all your dishes and laundry are clean. It might be a week before the power comes back on. In Miami, for Hurricane Wilma, I was without power for almost a month.
  16. If you have no landline phone, arrange to work with someone who does. It can take weeks, if not a month, after a storm before all cell towers are realigned.

Do this every year prior to the storm season

  1. Check your trees for dead limbs. Service them if you can.
  2. If you have a Generator or Chain saw, service it now. Make sure it is ready to go.
  3. Check your grill. You’ll need gas or lots of wood/charcoal for cooking. Never use a gas grill indoors.
  4.  Stock your and your family’s medical needs as well.

If you have pets

  1. Fill up a clean, large plastic bin with water for your pet.
  2. Of course, all pets should be brought in.
  3. With all the noise from wind and rain, if your pet is crate trained it will help keep her calm to go into her crate.
  4. If you’re evacuating and you have a To-Go bag, make one for your pet – medicines, collar and leash with id, crate, towel(s), blanket or bed, favorite toy, treats, food, and water; also, any pertinent medical records (shots, medical history). Take your pet with you.
  5. Should the worst occur and your pet gets out and gets lost, having her microchipped will help ensure she gets home. VIP Petcare Clinic will chip pets for about $19.  After a storm, many pet rescue groups come in to pick up “strays”. If your pet has no microchip, they could pick her up, and your pet could end up adopted out to someone hundreds or even thousands of miles away.

Again, this is unconventional advice to be followed along with conventional advice. However, it is just as valuable.

Database Management: SQL Joins

Please note that the following blog post provides quick examples that illustrate common joins in SQL. For more information, please see the resources below:

Inner joins
SELECT e.ename, e.deptno, d.deptno
  FROM emp e INNER JOIN dept d
  ON e.deptno = d.deptno

SELECT e.ename, e.sal, s.grade
  FROM emp e INNER JOIN salgrade s
  ON e.sal BETWEEN s.losal AND s.hisal

grade      losal        hisal
-----      -----        ------
1            700        1200
2           1201        1400
3           1401        2000
4           2001        3000
5           3001        9999

Gives the following solution:
ename           sal     grade
----------   --------- ---------
JAMES            950         1
SMITH            800         1
ADAMS           1100         1
Outer joins
SELECT e.ename, e.deptno,  d.deptno
  FROM emp e RIGHT JOIN dept d
  ON e.deptno = d.deptno

SELECT e.deptno, d.deptno
  FROM emp e LEFT JOIN dept d
  ON e.deptno = d.deptno
Self Joins
SELECT worker.ename + ' works for ' + manager.ename
  FROM emp worker INNER JOIN emp manager
  ON worker.mgr = manager.empno
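The non-equi (BETWEEN) join above can be checked end to end with a small, self-contained script. This sketch uses Python's sqlite3 with made-up emp rows that match the sample output shown earlier:

```python
import sqlite3

# In-memory database with a minimal schema mirroring the classic
# emp/salgrade example tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
cur.execute("CREATE TABLE salgrade (grade INTEGER, losal INTEGER, hisal INTEGER)")
cur.executemany("INSERT INTO emp VALUES (?, ?)",
                [("JAMES", 950), ("SMITH", 800), ("ADAMS", 1100)])
cur.executemany("INSERT INTO salgrade VALUES (?, ?, ?)",
                [(1, 700, 1200), (2, 1201, 1400), (3, 1401, 2000),
                 (4, 2001, 3000), (5, 3001, 9999)])

# Non-equi join: the join condition is a range test rather than equality.
rows = cur.execute("""
    SELECT e.ename, e.sal, s.grade
      FROM emp e INNER JOIN salgrade s
        ON e.sal BETWEEN s.losal AND s.hisal
""").fetchall()
for ename, sal, grade in rows:
    print(ename, sal, grade)
conn.close()
```

All three salaries fall in the 700–1200 band, so each employee is matched with grade 1, reproducing the solution table above.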


A Letter of Gratitude to Dr. Shaila Miranda

Dr. Shaila Miranda has taught me that I am the author of my story. I have known Dr. Miranda for many years, and during this period she has excelled as a mentor and an educator. Throughout my two years at the University of Oklahoma, Dr. Miranda took the initiative to know her students on a personal level. I first met her at a riveting presentation she gave during the M.B.A. Program Prelude Week. After further interactions, she inspired me to seek a dual master’s degree, an M.B.A. and an M.S. in M.I.S., rather than the traditional M.B.A. alone. Dr. Miranda helped me realize my hidden passion for information systems [technology]. It takes an exceptional mentor to recognize and instill a vision so powerful that it can alter the course of a mentee’s life.

A few semesters after our original meeting, she learned about a non-profit I was about to start and saw how I leveraged social media to forward the cause. This inspired her to become a Sooner Ally, and other M.I.S. faculty followed suit. This demonstrates her passion and conviction as an outstanding educator. Dr. Miranda is willing to listen, learn, and act based on her interactions with students just as much as she is willing to support them. She demonstrated social awareness, becoming not only a model professor for the other professors in the department but also a model of inclusive behavior for her students.

As one of her students, I was completely engaged in the course she was teaching. Her curriculum was remarkable, and her lectures and active learning with real-world data gave the class invaluable insight. Dr. Miranda’s passion and commitment to education could be seen throughout the semester as she sought employees from Devon and other local companies to help facilitate our education. This demonstrated her skill in managing relationships, which allowed her to educate her students at a deeper level.

Her commitment to her students did not end with the term. This was evident when she nominated me to represent the University of Oklahoma at the Information Systems and Walmart IT Summit. She coached the students individually, and as a group, to give us a competitive edge in the competition. As if that were not enough, her commitment to her students is so vast that she drove the team to Arkansas and attended our presentations with a video recorder in hand. It was her lessons that took our team to 3rd place in the Walmart IT Summit Competition. She made us aware of ourselves and our surroundings; this is what gave us the competitive edge.

As graduation neared, she arranged mock interviews that helped me land two job offers. Upon receiving both offers, she assisted me in the decision-making process, engaging the self-awareness and self-management sides of my emotional intelligence to help me make the right decision. It was that decision that got me the job I have now, which has allowed me to attend Colorado Technical University and finally complete the doctorate. Words cannot express how grateful I am for this outstanding educator. She got to know me as an individual, mentored me, and made me who I am today.

She believed in me when others didn’t, and for that I am grateful. She developed me into the person I am today, and she even provided a key piece of advice for my dissertation (the tool I eventually used to analyze my data), even though she was not in the same school nor on my committee. She was still managing her relationship with me beyond the years of my education in that department. She showed me that mentorships and relationships extend beyond an organization and traverse time. This is what I have learned from her: believe in the people you lead.

Different Types of Leadership Styles

Leadership Theories:

  • Chapman and Sisodia (2015) define leadership by the value leaders bring to people. The authors’ primary guiding value is that “We measure success by the way we touch the lives of people.” This type of leadership practice stems from treating followers similarly to how someone would want their own kids to be treated in the work environment. It relies on coaching followers to build on their greatness. Recognition is then given that shakes employees to the core by involving their families, so that an employee’s family can be proud of their spouse or parent. The goal of this type of leadership is to have employees feel seen, valued, and heard, such that they want to be their best and do their best, not just for the company but for their coworkers as well.
  • Cashman (2010) defines leadership from an inside-out approach of personal mastery. This leadership style focuses on the leader’s self-awareness of conscious beliefs and shadow beliefs, to grow and deepen the leader’s authenticity. Cashman pushes leaders to identify, reflect on, and recognize their core talents, values, and purpose. The purpose of any leadership is to understand “How am I going to make a difference?” and “How am I going to enhance other people’s lives?” Working from the leader’s core purpose releases untapped energy to do more meaningful work, which frees leaders and opens them up to more possibilities than merely working toward their goals.
  • Open Leadership: Has five rules: respecting and empowering customers and employees, consistently building trust, nurturing curiosity and humility, holding openness accountable, and forgiving failures (Li, 2010). These leaders must let go of the old mentality of micromanaging; once they do, they are open to grow into new opportunities. This thought process shares commonalities with knowledge sharing: if people share the knowledge they have accumulated, they can let go of their current tasks and focus on new and better opportunities. Li stated that open leadership allows leaders to build, deepen, and nurture relationships with customers and employees. Open leadership is a theory of leadership that is customer and employee centered.
  • Values-based leadership requires four principles: self-reflection, balance, humility, and self-confidence (Kraemer, 2015). Through self-reflection, leaders identify the core beliefs and values that matter to them. Leaders who view situations from multiple perspectives to gain a deeper understanding are considered balanced. Humility refers to leaders not forgetting who they are and where they came from, so they gain appreciation for each person. Finally, self-confidence is the leader accepting themselves as they are, warts and all.

Parts of these leadership theories that resonate

Each of these leadership theories has a few concepts in common. Most of them agree with each other because each has a focus on growing the leader’s followers (Cashman, 2010; Chapman & Sisodia, 2015; Li, 2010; Kraemer, 2015). Cashman and Kraemer focus on self-reflection, so that the leader can understand personal values, strengths, and weaknesses. For Cashman, self-reflection centers on purpose, where an unbound level of energy resides. For Kraemer, self-reflection centers on defining the leader’s values and on constantly assessing and realigning the leader’s roles toward those values.


  • Cashman, K. (2010). Leadership from the inside out: Becoming a leader for life (2nd ed.). San Francisco, CA: Berrett-Koehler Publishers, Inc.
  • Chapman, B., & Sisodia, R. (2015). Everybody matters: The extraordinary power of caring for your people like family. New York, NY: Penguin.
  • Li, C. (2010). Open leadership: How social technology can transform the way you lead (1st ed.). Vitalbook file.
  • Kraemer, H. M. J. (2015). Becoming the best (1st ed.). New Jersey: Wiley.

Higgs Boson: Case Study on a famous prediction that came true


  • Forecasting (business context): relies on empirical relationships created from observations, theory, and consistent patterns, which can carry assumptions and limitations that are either known or unknown, to give the future state of a certain event (Seeman, 2002). For instance, forecasting income from a simple income statement can provide key data on how a company is operating, but the assumptions and limitations of this method can wipe out a business (Garrett, 2013).
  • Predictions (business context): a more general term; a statement of a future state of a certain event, which can be based on empirical relationships, strategic foresight, or even scenario planning (Seeman, 2002; Ogilvy, 2015).
  • Scenarios: alternate futures that change with time as supportive and challenging forces unfold, usually containing data such as the likelihood of success or failure, the story of the landscape, innovative opportunities, challenges to be faced, signals, etc. (Ogilvy, 2015; Wade, 2012).

Case Study: A famous prediction that came true

The Higgs boson helps explain the origin of mass in the universe (World Science Festival, 2013). Mass is the resistance of an object to being pushed or pulled by other objects or forces in the universe, and an object’s mass comes from its constituent particles (Greene, 2013; PBS Space-Time, 2015; World Science Festival, 2013). The question is: where does the mass of these particles come from? The universe is filled with an invisible Higgs field in which these particles are swimming; they experience a form of resistance when they speed up or slow down, and this resistance in the Higgs field is the particles’ mass (Greene, 2013; World Science Festival, 2013). Certain particles have mass (electrons) and others do not (photons), because only certain particles interact with the invisible Higgs field (PBS Space-Time, 2015). Scientists use the Large Hadron Collider to speed up particles such that, when they collide in just the right way (a 1-in-1,000,000,000 chance), the collision can clump a bit of the Higgs field into a Higgs particle that lasts for about 10^-22 seconds (Greene, 2013; PBS Space-Time, 2015; World Science Festival, 2013). Therefore, finding the Higgs particle is a direct link to proving the existence of the Higgs field (PBS Space-Time, 2015).

The importance of proving this prediction correct (World Science Festival, 2013):

  • Understanding where mass comes from
  • The Higgs particle is a new form of particle that doesn’t spin
  • Shows that mathematics can lead the way to discovering something about our reality

This was a prediction that waited over 50 years to be confirmed through observation; it has its scientific and mathematical roots in quantum physics and was formalized by Higgs in 1964 (Greene, 2013; PBS Space-Time, 2015; World Science Festival, 2013).

Supporting Forces for the prediction:

  • Technological: the development, over the course of 50 years, of technology capable of testing the mathematics helped facilitate the confirmation of this prediction (Greene, 2013; World Science Festival, 2013). The actual technology used is the ATLAS detector attached to the Large Hadron Collider (Greene, 2013).
  • Financial: Through international collaboration among thousands of scientists and over a dozen countries, they were able to amass the financial capital to build the $10 billion Large Hadron Collider.


Business Intelligence: Decision Support Systems

Many years ago, a measure of Business Intelligence (BI) systems was how big the data warehouse was (McNurlin, Sprague, & Bui, 2008). This measure made little sense, because it is not the quantity of the data that matters but the quality: a lot of bad data in the warehouse means a lot of bad data-driven decisions. Both BI and Decision Support Systems (DSS) help provide data to support data-driven decisions. However, McNurlin et al. (2008) state that a DSS is one of five principles of BI, along with data mining, executive information systems, expert systems, and agent-based modeling.

  • A BI strategy can include, but is not limited to, data extraction, data processing, data mining, data analysis, reporting, dashboards, performance management, actionable decisions, etc. (Fayyad, Piatetsky-Shapiro, & Smyth, 1996; Padhy, Mishra, & Panigrahi, 2012; McNurlin et al., 2008). This definition, along with the fact that DSS is one of five principles of BI, suggests that DSS was created before BI and that BI is a newer, more holistic view of data-driven decision making.
  • A DSS helps execute the project, expand the strategy, improve processes, and improve quality controls in a quick and timely fashion. The main role of data warehouses is to support the DSS (Carter, Farmer, & Siegel, 2014). The three components of a DSS are the data component (comprising databases or a data warehouse), the model component (comprising a model base), and the dialog component (the software system through which a user interacts with the DSS) (McNurlin et al., 2008).

McNurlin et al. (2008) describe a case study in which Ore-Ida Foods, Inc. had a marketing DSS to support its data-driven decisions, looking at: data retrieval (internal data and external market data), market analysis (70% of the use of their DSS, where data was combined and relationships were discovered), and modeling (which was frequently updated). The modeling offered great insight for marketing management. McNurlin et al. (2008) emphasize that DSS tend to be well defined, but rely heavily on internal data with little external data, and that testing of the model/data is rarely done.

The incorporation of internal and external data into the data warehouse helps both BI strategies and DSS. However, one thing BI strategies provide that DSS do not is an answer to “What is the right data that should be collected and presented?” DSS are more the how component, whereas BI systems generate the why, what, and how, because of their constant feedback loop back into the business and to the decision makers. This was seen in a hospital case study and was one of the main reasons why it succeeded (Topaloglou & Barone, 2015). As illustrated in that case study, all the data types were consolidated under unifying definitions and types, with defined roles and responsibilities assigned to each. Each data element entered into the data warehouse had a particular purpose, defined through interviews with all the different levels of the hospital, ranging from the business level to the process level.

BI strategies can also affect supply chain management in a manufacturing setting. The 787-8, 787-9, and 787-10 Boeing Dreamliners have outsourced roughly 30% or more of their parts and components; this level of outsourcing in a product mix is new, since the current Boeing 747 is only about 5% outsourced (Yeoh & Popovič, 2016). As more companies increase the outsourcing percentages of their product mix, it becomes more crucial to capture data on fault tolerances for each of those outsourced parts. BI data could also inform decisions on which suppliers to keep. Companies as huge as Boeing can have multiple suppliers for the same part. If inventory analysis finds an unusually larger-than-average variance in the performance of an item, they can either (1) negotiate a lower price to compensate for that variance, or (2) give the supplier notice that the contract will be terminated unless the variance for that part is lowered. The same applies to auto manufacturing plants, steel mills, etc.
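The supplier-variance screen described above can be sketched in a few lines. A minimal sketch in Python, where the supplier names, the tolerance data, and the 2x-mean-variance threshold are all hypothetical illustrations rather than anything from the cited sources:

```python
# Hypothetical sketch: flag suppliers of the same part whose performance
# variance is unusually high relative to their peers. All names, data,
# and the threshold ratio are illustrative assumptions.
from statistics import pvariance

# Tolerance deviations from spec (e.g., in mm) per supplier of one part
measurements = {
    "SupplierA": [0.01, -0.02, 0.00, 0.01, -0.01],
    "SupplierB": [0.09, -0.12, 0.15, -0.08, 0.11],
    "SupplierC": [0.02, -0.01, 0.00, -0.02, 0.01],
}

def flag_high_variance(samples, ratio=2.0):
    """Return suppliers whose variance exceeds `ratio` times the mean variance."""
    variances = {s: pvariance(vals) for s, vals in samples.items()}
    mean_var = sum(variances.values()) / len(variances)
    return sorted(s for s, v in variances.items() if v > ratio * mean_var)

print(flag_high_variance(measurements))
```

A flagged supplier would then be the candidate for renegotiation or contract termination, as described above.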



Zeno’s Paradox

Some infinities are bigger than others.

A paradox to motion:

Zeno described a paradox of motion, which helps describe one of the many types of infinities. Zeno’s paradox is described below (Stanford Encyclopedia of Philosophy, 2010):

“Imagine Achilles chasing a tortoise, and suppose that Achilles is running at 1 m/s, that the tortoise is crawling at 0.1 m/s and that the tortoise starts out 0.9 m ahead of Achilles. On the face of it Achilles should catch the tortoise after 1s, at a distance of 1m from where he starts (and so 0.1m from where the Tortoise starts). We could break Achilles’ motion up … as follows: before Achilles can catch the tortoise he must reach the point where the tortoise started. But in the time he takes to do this the tortoise crawls a little further forward. So next Achilles must reach this new point. But in the time it takes Achilles to achieve this the tortoise crawls forward a tiny bit further. And so on to infinity: every time that Achilles reaches the place where the tortoise was, the tortoise has had enough time to get a little bit further, and so Achilles has another run to make, and so Achilles has an infinite number of finite catch-ups to do before he can catch the tortoise, and so, Zeno concludes, he never catches the tortoise.”
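The resolution is that the infinitely many catch-up runs take less and less time, so their sum converges. A minimal sketch in Python using the numbers from the quoted passage (Achilles at 1 m/s, the tortoise at 0.1 m/s, a 0.9 m head start); the function name and the number of terms summed are my own illustrative choices:

```python
# Each catch-up run covers the current gap; meanwhile the tortoise opens
# a new gap one tenth the size. The total time is the geometric series
# 0.9 + 0.09 + 0.009 + ..., which converges to 1 second.
def total_catchup_time(terms=60):
    time = 0.0
    gap = 0.9                # metres Achilles must cover on this catch-up
    for _ in range(terms):
        time += gap / 1.0    # Achilles runs the gap at 1 m/s
        gap *= 0.1 / 1.0     # the tortoise (0.1 m/s) opens a smaller gap
    return time

print(total_catchup_time())  # converges toward 1.0 second
```

An infinite number of steps, yet a finite total: exactly the "little infinity" the rest of this post celebrates.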

This paradox was used to illustrate that not all infinities are the same; one infinity can indeed be bigger than another. An interpretation of this paradox was written poetically in a eulogy in the book The Fault in Our Stars (Green, 2012):

“There are infinite numbers between 0 and 1. There’s .1 and .12 and .112 and an infinite collection of others. Of course there is a bigger infinite set of numbers between 0 and 2, or between 0 and a million. Some infinities are bigger than other infinities. … There are days, many days of them, when I resent the size of my unbounded set. I want more numbers than I’m likely to get, and God, I want more numbers for Augustus Waters than he got. But, Gus, my love, I cannot tell you how thankful I am for our little infinity. I wouldn’t trade it for the world. You gave me a forever within the numbered days, and I’m grateful.” (pp. 259-260)

So to my readers out there, I want to thank you in advance for the little infinities I will get to share with each of you through this blog, and for that I am grateful.


  • Green, J. (2012). The fault in our stars.  New York, New York: Penguin Group (USA) Inc.
  • Stanford Encyclopedia of Philosophy (2010). Zeno’s Paradoxes. Retrieved from