Tuesday, December 17, 2019

Aplicación de la minería de datos para la toma de decisiones en procesos de fabricación de productos farmacéuticos

Taken from: https://www.itm.edu.co/wp-content/uploads/la-tekhne/2019/PDF-La-Tekhne-No.-106-Diciembre-de-2019-3_compressed.pdf
APPLICATION OF DATA MINING FOR DECISION MAKING IN PROCESSES OF MANUFACTURE OF PHARMACEUTICAL PRODUCTS

Stefany Paola Tirado De Stefano
Julián Alberto Uribe Gómez

At present, owing to the massive increase in the amount of data that must be collected and analyzed in business environments, data mining methodologies have emerged: techniques that allow knowledge to be extracted from massive data sources, detecting opportunities to optimize decision making. In this spirit, this study seeks to take advantage of the data obtained during the production process of a pharmaceutical company focused on manufacturing physiological sera and intravenous solutions.
For this study, 2,724 records were available, containing information on product identification, production line, production batch, batch size, type of defect found, number of defects, and the stage at which they were detected. The specific objectives of the study are to describe which line presents the greatest number of defects, to observe whether there is an association between the production line and the type of defect found, to determine which stage of the process concentrates the greatest number of defects, and to identify the types of defects most likely to occur; the general objective is to design strategies that help improve the detection of anomalies within the production process. To achieve this, two strategies are proposed:
1. Classify the data by their characteristics, generating several groups so that each record is placed in the group whose required aspects it matches; records that do not fit a group are excluded and reassigned until they are admitted into one of them. Within data mining, this technique is called clustering.
2. Describe a relationship between the variables contained in the studied data, so that a production line can be associated with a type of defect; this type of analysis is known as association rules.
The procedure is as follows:
1. Prepare the data.
2. Perform the cleaning and transformation process. It was determined that the Identification and Batch variables are not relevant to the analysis, so they were removed.
3. Draw box plots of the numerical variables Size and Number of defects to show the distribution of the data and reveal outliers, which were subsequently removed (see Figure 1).
4. Identify characteristics of the categorical variables. In this step, it was established that line 6 presents the greatest number of defects. The most frequent defect types are cap particle and poor heat sealing, both of which belong to the review stage.
5. To observe the characteristics that lead to the generation of defects, a clustering analysis is first carried out, setting 8 groups (or clusters) to be created with the k-means method, whose purpose is to assign each point (row) to one of the k groups based on its characteristics and its distance from the cluster center (see Table 1).
This grouping was performed with the KNIME software, a data mining platform that allows models to be developed in a visual environment (see Figure 2). The generated clusters are evaluated with the Davies-Bouldin (DB) index, which indicates how compact the clusters are; in this case, the DB index yields a value of 0.348, meaning that the clusters have good internal cohesion. The Silhouette index, which evaluates both the cohesion and the separability of the clusters, is also used; here it takes a value of 0.706, indicating a good grouping of the records.
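For readers who want to reproduce this kind of grouping outside KNIME, the sketch below shows the same steps in Python with scikit-learn. The data and the number of features are synthetic stand-ins, not the study's actual records.

# Minimal k-means sketch with the two evaluation indices mentioned above.
# The data are synthetic; in the study the inputs were the cleaned
# production records (lot size, number of defects, encoded categoricals).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 2)) + rng.choice([0, 6, 12], size=(300, 1))  # fake features
X = StandardScaler().fit_transform(X)          # scale so distances are comparable
model = KMeans(n_clusters=8, n_init=10, random_state=42).fit(X)
labels = model.labels_
# Davies-Bouldin: lower is better (more compact clusters).
print("DB index:", davies_bouldin_score(X, labels))
# Silhouette: closer to 1 means good cohesion and separation.
print("Silhouette:", silhouette_score(X, labels))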
The clusters reveal the characteristics of the data they contain. Cluster 0, for example, groups observations with an average lot size of 4,641.7 units and, on average, 6.79 defective units, coming from line 3 and presenting a deformed cap defect detected at the review stage.
To evaluate the association rules, the Apriori algorithm is used, which reduces the number of candidate itemsets for association. This technique was implemented in the Python programming language, producing a model with 23 rules involving lines 3, 5 and 6. One of the resulting association rules states that when manufacturing takes place on line 6 with a very large initial lot size, there is a 79% probability of detecting a heat-sealing defect.
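The study's exact Python code is not reproduced in the article; as an illustration only, the mlxtend library offers an Apriori implementation, and the sketch below mines a rule of the same shape from invented (line, lot size, defect) transactions.

# Sketch of Apriori-based association rules with mlxtend (synthetic records).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each transaction lists the categorical attributes of one production record.
transactions = [
    ["line_6", "large_lot", "heat_seal_defect"],
    ["line_6", "large_lot", "heat_seal_defect"],
    ["line_3", "small_lot", "deformed_cap"],
    ["line_6", "large_lot", "heat_seal_defect"],
    ["line_5", "medium_lot", "cap_particle"],
    ["line_6", "large_lot", "cap_particle"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)
frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])

With these invented records, the rule {line_6, large_lot} -> {heat_seal_defect} comes out with confidence 0.75, the same shape as the 79% rule reported above.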
In conclusion, data mining makes it possible to describe and explain facts from a data set, in this case from a pharmaceutical company where the main failures arise in the operation of the machinery; it is therefore important that the company constantly monitor the equipment and perform preventive maintenance. It is likewise recommended to adjust the pressure the machine exerts to form and join the cap to the bag containing the solution, since cluster 0 showed that the deformed cap defect was very common in the production of line 3, making it important to evaluate the current state of that line's machines.








Modelos de halving para bitcoin: un acercamiento desde la simulación


Halving models for bitcoin: an approach from simulation

Julián Alberto Uribe Gómez

Bitcoin has been the subject of debate, multiple opinions and controversy since its creation as a cryptocurrency 10 years ago, not only because of its attractiveness as a digital asset with high returns but also because of its uses, which include, among others, illicit activities.
Bitcoin is the first and main cryptocurrency, created by a person or group under the name Satoshi Nakamoto, whose aim was to develop a way to make online P2P payments without the need for intermediaries such as financial institutions (Nakamoto, 2008), and likewise to demonstrate the creation of value without relying on central organizations.
Bitcoins, like any financial asset, must be generated, as in the case of gold: the asset must be extracted and subsequently traded through supply and demand in a specific market to obtain value. This process of generating supply in the crypto-asset market is known as mining, the only way new cryptocurrencies can be created. It is done by solving increasingly complex mathematical problems, with thousands of mining nodes around the world competing to obtain new bitcoins.
Initially, the bitcoin protocol created by Satoshi Nakamoto set a total production, or fixed supply, of 21 million cryptocurrencies, with the sole purpose of avoiding inflation of the asset. Much of bitcoin's attractiveness lies in the promise of becoming a scarce product, with a value that climbed to 20 thousand dollars per bitcoin at the end of 2017, so pursuing a deflationary model should build its value gradually. The intensive mining of this asset under the imposed production protocol has driven a reduction process: as shown in Figure 1, a little more than 17 million bitcoins have been produced and mined, approximately 83% of the total.
This process of reducing bitcoin mining is known as halving: an automated process that halves the bitcoins delivered to miners as a reward for the creation of new blocks. Every 210,000 blocks mined, equivalent to approximately every 4 years, the number of bitcoins delivered per block is cut in half. Thus, in 2009 the process began by delivering 50 bitcoins per block, then 25 bitcoins per block; currently 12.5 bitcoins are mined per block, and so on until the reward reaches zero bitcoins per block, which is estimated to happen in the year 2140.
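The arithmetic behind this cap is easy to check: summing the block rewards over all halving eras is a geometric series that converges to 21 million. A short Python verification (ignoring the satoshi-level rounding that the real protocol applies):

# Verify that the halving schedule converges to ~21 million bitcoins.
BLOCKS_PER_ERA = 210_000
reward = 50.0          # bitcoins per block in 2009
total = 0.0
era = 0
while reward >= 1e-8:  # 1 satoshi = 1e-8 BTC; below that the reward is zero
    total += BLOCKS_PER_ERA * reward
    era += 1
    reward /= 2        # the halving: the reward is cut in half each era
print(era, "eras, total supply ~", round(total))  # ~ 21,000,000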
With the system dynamics modeler of the NETLOGO platform, some approaches to the behavior of the halving process for bitcoin can be explored. Illustration 2 presents an approach calculated according to the ideal behavior of the bitcoin reduction system where, as mentioned before, every 4 years the number of bitcoins is reduced by half.
The behavior of these models is known as exponential decay and goal seeking; both behaviors can be observed in Illustration 3. The model shows that, theoretically, at halving 34 the reward will be 0 bitcoins, at which time the supply of the cryptocurrency will have stopped.
However, this reduction process also depends on additional factors in order to operate, being part of a more comprehensive process within the cryptocurrency system. Mining bitcoins requires suitable locations, mining nodes and even equipment with specific characteristics to create the bitcoins and obtain the reward. A halving process that includes these more comprehensive features can be proposed, as seen in Illustration 4.
The result of this model, represented in Figure 5, resembles the behavior observed in Figure 1: the amount of bitcoins mined begins with a smooth, less steep growth, and the total is reached gradually.
Finally, bitcoin and, in general, all the cryptocurrencies developed to date have set a precedent and changed the way the economy is seen, through decentralized proposals. Cryptocurrencies operate and will continue to operate because their innovative, technical and technological development has been gaining strength, creating a market and opening the exploration of new possibilities.


El diseño experimental aplicado en procesos de enrollamiento filamentario para industrias de materiales compuestos

Taken from: https://www.itm.edu.co/wp-content/uploads/la-tekhne/2019/PDF-La-Tekhne-No.-104-Agosto-de-2019_compressed.pdf

The experimental design applied in filamentary winding processes for composite materials industries

Julián Alberto Uribe Gómez

Composite materials are combinations of two or more materials that together provide better performance, properties and functionality than each one separately. Among them are polymeric compounds, composed of polymers that offer advantages over conventional materials, such as lightness, corrosion resistance and flexibility in manufacturing processes, among others.
These polymers can be combined with fibers in order to improve their properties and become composite materials, which cover a spectrum of applications ranging from structural and architectural elements for construction to the aerospace industry, the automotive industry, shipbuilding and wind power generators [1].
The filament winding process is one of the best-known manufacturing methods for glass-fiber-reinforced polymer composites. In this process, continuous reinforcements are impregnated with polymeric resin and wound at high speed and with precision onto a mandrel or mold that rotates around its axis [1]. The process is highly automated and controlled, and less labor-intensive than molding processes. It produces laminates with high strength-to-weight ratios and can be used for parts with high mechanical demands [2]. The structures that can be manufactured are bodies of revolution with cylindrical, spherical or conical symmetry, or with geodesic shapes [2], such as pipes, tanks and posts. The method has two manufacturing modes, depending on the process capacity of the company and the purpose of the parts to be manufactured: continuous and discontinuous, which differ in the possible forms of winding.
Figure 1 shows the basic scheme of the discontinuous (batch) filament winding process; Figure 2 shows the continuous process.
Approach to experimental design and data collection
The experimental design approach starts with identifying the variables that influence gel time. Gel time is the time the resin takes to pass from a liquid to a solid state while the piece takes shape; in practice, it is the working time available to form the product. Table 1 shows the factors and factor levels that directly influence the performance and functionality of a piece manufactured by filament winding.
Therefore, the main objectives required in the composite materials industry are: to evaluate the effects of the factors on the response variable and to propose a mathematical model to predict the behavior of the polymeric resin in the manufacturing process.
Development of the experimental design
The experiment was proposed as a 3^k Box-Behnken design with 4 factors and 3 levels per factor. A full design would require 81 experimental combinations; however, due to experimental limitations such as time, available resources and the feasibility of some combinations, the design was reduced to 16 experiments. On that basis, the effects on the response variable were calculated, as shown in Table 2.
On the other hand, Table 3 presents the analysis of variance (ANOVA) table, which shows which factors have a statistically significant effect on gel time; in this case, 5 factors have p-values below 0.05, and these are the main individual design factors.
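As an illustration only, since Table 1's factor names and the raw measurements are not reproduced here, an ANOVA of this kind can be computed in Python with statsmodels; the factor names and data below are hypothetical.

# Sketch of an ANOVA on gel time with statsmodels (synthetic data;
# the real factors and measurements are those of Tables 1-3).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 48
df = pd.DataFrame({
    "inhibitor":   rng.choice([0.1, 0.2, 0.3], n),   # hypothetical factor levels
    "catalyst":    rng.choice([1.0, 1.5, 2.0], n),
    "temperature": rng.choice([20, 25, 30], n),
    "resin_batch": rng.choice([1, 2, 3], n),
})
# Hypothetical response: the inhibitor raises gel time, the others lower it.
df["gel_time"] = (20 + 60 * df.inhibitor - 4 * df.catalyst
                  - 0.3 * df.temperature + rng.normal(0, 1, n))
model = smf.ols("gel_time ~ inhibitor + catalyst + temperature + C(resin_batch)",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # p-values per factor
print("R^2 =", round(model.rsquared, 3))

The fitted model can then be used for prediction with model.predict(), which is the role the optimal-point scenarios of Table 4 play in the article.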
Finally, the experimental design must serve as an input for decision making in industry and manufacturing, so it is important to predict gel time values according to the needs of different stakeholders. Two scenarios are considered (see Table 4), in which the optimal settings are requested for a gel time of 20 minutes and for one of 40 minutes. The R² = 93.747% obtained in the experimental design indicates how well the model fits and how reliable it is.
Conclusions
According to the results obtained, the factor with the greatest positive effect on the gel time of the polymeric resin is the amount of inhibitor used in manufacturing; conversely, the other factors have negative effects on the response variable.
It is also important to take into account the interactions between factor levels: when level combinations are chosen at random, the best or the most problematic options of each factor can coincide, producing combinations that in practice would not be run.

La innovación en Antioquia estudiada mediante la simulación basada en agentes


Innovation in Antioquia studied through agent-based simulation

Julián Alberto Uribe Gómez

When talking about the concept of innovation in a region, as in the case of the department of Antioquia, the discussion should undoubtedly be directed to a widely accepted and widespread concept: Regional Innovation Systems (RIS). An RIS can initially be defined as the infrastructure that supports innovation in the productive structure of the region, formed mainly by a network of relationships between the different public and private entities or agents that interact in the region of Antioquia, with the objective of working from their different capacities to promote innovation.
The general structure of an RIS is presented in Figure 1, which shows the bidirectional relationships of the 4 main entities that compose it:
• Explorers: Universities and research groups.
• Exploiters: SMEs and large companies.
• Catalysts: Technology support and development centers, transfer facilitators.
• Government: Policy makers.
The theoretical development of RISs has been influenced by different schools of thought, such as evolutionary economics, institutional economics, the new regional economics, the learning economy, the economics of innovation and network theory (Quintero & Robledo, 2013).
Historically, the RIS of Antioquia has been developing for more than two decades, since the implementation of local initiatives that took the key agents of the process as the basis of its construction. By the eighties, Antioquia already had significant strengths and a certain science and technology structure in the academic, productive and public sectors, and in those years the challenge was raised of developing a science, technology and innovation (STI) policy revolving around the interaction between the agents.
In the 1990s, with the change in Colombia's political constitution, the regions were granted certain powers and functions to take autonomous decisions and to promote the development of capacities, institutions and a basic infrastructure for a science and innovation system.
In the last decade, the university-industry-state committee was created and linked to the regional competitiveness councils and the departmental STI council. This has allowed Antioquia to achieve important development in terms of innovation among the different agents of the RIS (Llisterri & Pietrobelli, 2011).
From all of the above, the continuous and relational behavior of the RIS must be understood by conceptualizing the system as a complex network. This implies the use of computational tools to simulate its different innovation dynamics; therefore, the NETLOGO platform has been used to study these structures and various scenarios.
The model (see Figure 2) has two input variables, the percentage of R&D and the number of agents (companies, universities, transfer centers and policy makers) in the system, and as output variables the number of scientific publications and patents generated over a period of time, as indicators of the generation of innovation activities within the region.
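The model itself was built in NETLOGO and its rules are not reproduced here; purely as an illustration of its input/output structure (R&D share and number of agents in, publications and patents out), a toy Python sketch with invented interaction rules might look like this:

# Toy sketch of the model's input/output structure (invented rules,
# not the actual NETLOGO model): more agents and more R&D yield
# more publications and patents over time.
import random

def simulate(n_explorers=30, n_exploiters=50, rd_share=0.2, steps=100, seed=1):
    random.seed(seed)
    publications = patents = 0
    for _ in range(steps):
        for _ in range(n_explorers):
            if random.random() < rd_share:    # R&D funds a research result
                publications += 1
        # exploiters patent when they interact with an explorer
        for _ in range(min(n_explorers, n_exploiters)):
            if random.random() < rd_share / 2:
                patents += 1
    return publications, patents

print(simulate(rd_share=0.1))
print(simulate(rd_share=0.3))   # a higher R&D share -> more indicators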
According to the simulations carried out with the model (see Figure 3), the greatest generation of innovation indicators occurs in the Medellín area, since most of the agents immersed in the system converge there. Other agents, such as SMEs, lie to a lesser extent outside the Medellín area, and some of them do not participate as active actors in the innovation process.
In addition, incremental results in the indicators are directly proportional to larger clusters and more interrelationships among agents, to a greater number of explorers participating in the system and, to a greater extent, to higher percentages of R&D in the region.

Aplicabilidad del diseño experimental en la agroindustria

Applicability of experimental design in agribusiness


Métodos de búsqueda de conocimiento científico


METHODS OF SCIENTIFIC KNOWLEDGE SEARCH

Julián Alberto Uribe Gómez

Scientific research plays a preponderant role in the sciences, since it is what allows theorizing and achieving exploratory results, through the systematic, methodical and experimental study of phenomena by application of the scientific method.
The scientific method has therefore been both a methodology and an indispensable research tool, used in multiple fields of knowledge and in the improvement of reality [1] by diverse personalities throughout history.
The history of science and technology has witnessed many scientific milestones related to various methods of knowledge search, since the way to reach a result does not always follow the same route. What is certain is that the explicit or tacit basis will always be the scientific method and its stages: problem statement; search, collection and analysis of information; hypothesis; and verification [2].
From this, several methods of knowledge search can be classified, with celebrated historical figures from different branches of science presented, along with their scientific objective, as examples of each method.
The examples and the methods listed above show how a single problem can be approached from different perspectives. Science and its history are versatile and full of debate; their continuous evolution has allowed us to question developments and postulate better-adjusted principles, generating transformative, active knowledge [3].
Ideas such as mechanism and determinism in past epochs allowed solid scientific foundations to be laid for putting the sciences into operation; however, challenging these assumptions through other theories, such as the principle of causality, the principle of sufficient reason, the principle of indeterminacy, chance and unpredictability, among others, has generated substantial debates for the progress and good of science and technology.



La adopción tecnológica: entre dos paradigmas de simulación


Technological adoption: between two simulation paradigms

Julián Alberto Uribe Gómez

Technological adoption is perhaps one of the best-known and most widely researched social phenomena in the academic world, for the study of cycles of acceptance and diffusion of new products or services. A well-known example is the Bass diffusion model, which studies the behavior of innovators and imitators. This model was developed in 1969 [1] and remains valid and widely applied to the empirical study of the diffusion of new technologies and innovations in marketing, strategy and technology management, among other fields [2].

The principle governing the Bass model is the existence of a system with two possible states, potential adopters and adopters, where potential adopters pass to the adopter group when they acquire, through purchase or use, an innovative product or service technology for the first time [3].

The Bass model uses three main parameters: the potential market (potential adopters), the contact coefficient, and the adoption coefficient. This has led to mathematical formulations, borrowed from physics and biology, that model the phenomenon as follows [3], writing M for the potential market, A(t) for the accumulated adopters at time t, p for the adoption (innovation) coefficient and q for the contact (imitation) coefficient:

dA/dt = p (M - A) + q (A / M) (M - A)

Analytically, the model offers a predictive picture of the behavior by generating numerical outputs of the phenomenon; the variables and their relationships can be represented, but the solution must be repeated several times to trace the trajectories of the adoption phenomenon. Equational models have therefore found support in computational and simulation models to represent phenomena not only analytically but also descriptively, especially under two paradigms: system dynamics and agent-based models.

Each simulation paradigm helps to understand the behavior of social and technological phenomena from a different perspective, one strategic and one tactical. These methodologies help the analyst model such situations according to his or her research, academic or professional needs.

To model the Bass diffusion phenomenon through system dynamics, causal diagrams are used to represent the multiple relationships between variables, and stock-and-flow diagrams are used to simulate their behavior descriptively. The model was created on the web platform https://insightmaker.com/, an open-access platform for quickly building models under the system dynamics paradigm; it can be seen in Figure 1.
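As a rough numerical counterpart of that stock-and-flow model, the Bass equation above can be integrated with a simple Euler scheme; the parameter values in this Python sketch are illustrative, not taken from the article.

# Euler integration of the Bass diffusion equation (illustrative parameters).
M, p, q = 10_000, 0.03, 0.38   # potential market, adoption and contact coefficients
dt, T = 0.25, 30.0             # time step and simulated horizon
steps = int(T / dt)
A = 0.0                        # the adopters stock, initially empty
for k in range(1, steps + 1):
    rate = p * (M - A) + q * (A / M) * (M - A)   # the adoption flow
    A += rate * dt                               # the flow accumulates into the stock
    if k % 20 == 0:
        print(f"t={k * dt:5.1f}  adopters={A:8.0f}")  # traces the S-curve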

To represent the adoption model under the agent-based simulation paradigm, unlike system dynamics, several aspects must be taken into account:
1. Define behavior rules for the simulated entities so that macro-level behaviors are obtained.
2. Define the interaction environment of the agents.
3. Schedule the simulation and its states.
In this case, the NETLOGO platform https://ccl.northwestern.edu/netlogo/ was used for the modeling; it is free to use, designed for the study of this paradigm, and was used to generate the proposed model, which can be seen in Figure 2.
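A minimal agent-based counterpart can be sketched in a few lines of Python (the article's actual model was built in NETLOGO; this sketch only mirrors the rule structure of points 1-3 above):

# Agent-based sketch of Bass adoption: each step, a potential adopter adopts
# either by external influence (p) or by contact with a current adopter (q).
import random

random.seed(3)
N, p, q = 1000, 0.03, 0.4
adopted = [False] * N
for step in range(25):
    n_adopters = sum(adopted)
    for i in range(N):
        if not adopted[i] and random.random() < p + q * n_adopters / N:
            adopted[i] = True
    print(step, sum(adopted))   # the counts trace a discrete S-curve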
Simulating technological adoption with the two models built shows similar behaviors in their results; the difference between them is that agent-based simulation models are mainly discrete, while system dynamics models are continuous. As can be seen in Figures 3 and 4, both generate the well-known "S" curves of Bass's analytical model, giving the analytical theory a descriptive account of its behavior.
Both simulation paradigms thus describe the phenomenon; however, in agent-based modeling the interaction of the agents can be observed during the simulation, as seen in Figure 5, and the environment in which the agents live can be manipulated to build various adoption scenarios.
In conclusion:
Simulation paradigms can be very useful tools for describing and studying the behavior of phenomena that are sometimes complex. These paradigms offer several levels of abstraction for representing situations according to the analysis needs: system dynamics works at a macro, strategic, high level of abstraction, while agent-based modeling not only supports a high level of abstraction but also allows direct interaction with the entities, playing with their behavior rules and with multiple scenarios.

Simulación y modelación basada en agentes: un mundo por explorar


Simulation and agent-based modeling: a world to explore

Julián Alberto Uribe Gómez

When you hear the word "agent", what do you think of? What does it evoke? Perhaps the renowned movie "The Matrix", winner of 4 Oscars, and its trilogy, famous for its antagonist Agent Smith. It was there that the concept began to be talked about in a fairly common way; however, quite apart from that fictional notion, there is an academic concept that has been exploited for decades by researchers and scientists.
It can be said that the concept of "agent" was born in the 1960s, under the influence of the LOGO programming language. This language and its platform had education and the teaching of programming as their main objective (Pea, 2007), in a way that was apparently very didactic for the time. Back then one did not program "agents" but "turtles": with simple commands, the "turtle" executed a series of orders and movements.
This gave rise to much more powerful applications, one of which is the current NETLOGO platform. This platform still implements the original "turtle" concept; however, as an academically accepted tool for programming the events of individual entities, the "turtle" has come to be called an "agent".
NETLOGO, as an educational and procedural platform, was created with the same educational principle as LOGO: children and adults can learn alike, and no prior programming background is required. In addition, the program is an open-source platform available at https://ccl.northwestern.edu/netlogo/.
Over the years, platforms such as NETLOGO have incorporated various tools that enhance their usefulness in different areas of knowledge. Platforms like this are known as multi-paradigm tools, because they allow the exploration of phenomena in biology, physics, chemistry, psychology, networks, computer science, economics and other fields, under educational, procedural and simulation approaches, entering what is currently known as the study of emergent phenomena or behaviors.
From this perspective, the study of complex phenomena and their modeling and simulation has begun to gain great importance. Simulation offers an option to improve decision making and reduce response times in situations of conflict and uncertainty (Viveros & Chew, 2013); likewise, it has been shown that traditional analytical mathematics finds it difficult to establish relationships and solutions in the short term. Some examples where simulation and "agents" have been used are models of innovation systems, diffusion and adoption of technology, social networks, epidemics and viruses, the behavior of insect colonies and land traffic, among others.
With this in mind, what then is an "agent"? An "agent" is a heterogeneous object or entity, with a set of states or rules, that exhibits pre-programmed behavior to perform specific tasks in a given environment. An agent is also programmed to be autonomous, reliable and capable of learning (Foner, n.d.) when interacting with other agents in the system. Examples of agents include ants, people, cars, companies, computers and birds.
A characteristic example for representing agents and studying emergent behavior is the following medical situation: consider a tissue being attacked by a virus that reproduces very quickly and spreads through the tissue without allowing recovery. The agents represented in the system are the tissue and the virus population, each with its own behavior rules.
Figure 1 represents the initial phase of the preparation of the tissue-virus simulation; the points shown are the viral agents on the representation of the tissue.
Starting the simulation involves two important moments: the first is the preparation of the simulation environment; the second enters the simulation phase proper, in which the orders given to the tissue and virus agents are executed, resulting in the dynamics represented in Figures 2 and 3.
Figure 4 shows the behavior of the system after 500 simulation steps. At the beginning of the graph, the virus population and the inner tissue grow at the expense of the outer tissue, which decreases as it is consumed and favors the reproduction of the virus; the system then reaches a moment of equilibrium between the agents.
If this system in equilibrium is injected with the effect of an enzyme that functions as an antibody, one can study how the system and its behavior change. For this, another agent called enzyme is defined with the rule vaccinate-tissue, giving the result represented in Figure 5.
With the injection of the enzyme into the system, the viral agent is attacked and reduced, and the affected tissue recovers.
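The NETLOGO rules themselves are not listed in the article; the following toy Python sketch, with invented rates, reproduces the qualitative story: growth of the virus at the expense of the outer tissue, an equilibrium, and recovery once the enzyme is injected.

# Toy tissue-virus-enzyme dynamics (invented rates, for illustration):
# the virus consumes outer tissue and reproduces; injecting the enzyme
# ("vaccinate-tissue") kills viruses and lets the tissue recover.
virus, outer, inner = 10.0, 1000.0, 100.0
enzyme_on = False
for step in range(500):
    if step == 300:
        enzyme_on = True                    # inject the enzyme mid-run
    eaten = 0.1 * virus * (outer / 1000)    # virus consumes outer tissue
    outer += 0.01 * (1000 - outer) - eaten  # slow regrowth minus damage
    inner += 0.05 * eaten                   # inner tissue grows on consumption
    virus += 0.5 * eaten - 0.02 * virus     # reproduction minus natural death
    if enzyme_on:
        virus *= 0.85                       # the enzyme attacks the viral agent
    virus = max(virus, 0.0)
    if step % 100 == 0:
        print(step, round(virus, 1), round(outer, 1), round(inner, 1))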
Simulation brings benefits to all areas of science, since it allows experiments to be carried out with minimal risk and positive or negative behaviors to be anticipated. Combined with the paradigm of agent-based models, it allows global phenomena arising from individual behaviors to be explored and understood.

Propuesta de aplicación de la minería de datos para la toma de decisiones en la prestación del servicio de parqueo en la ciudad de Medellín

Proposed application of data mining for decision making in the provision of the parking service in the city of Medellín



La complejidad empresarial



BUSINESS COMPLEXITY

Julián Alberto Uribe Gómez

The term complexity is used to refer to a state in which many different factors interact with each other. However, there is no agreed or clearly valid definition of the term, so synonyms such as complex behavior are often chosen.

It is precisely these behaviors, found in each of the factors or entities, be they ants, neurons, cities or communities, that make their interrelations complex. For this reason, measuring complexity has become an arduous task. Nevertheless, the areas related to computing, in trying to define complexity, have sought a way to measure it, and they have achieved this by viewing everything as quantifiable information.

One of the ways found to measure the level of complexity of a set of factors has been to answer the following question: how much information is required to describe this system? This has led to the mathematization and simulation of computational processes, meaning that a system can be represented as an algorithm, that is, a computer program.
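One rough but concrete way to pose that question is to compare how many bytes a compressor needs to describe different systems; the short Python illustration below uses compressed length as a stand-in for descriptive information.

# Rough illustration: compressed length as a proxy for the amount of
# information needed to describe a system.
import random
import zlib

random.seed(0)
ordered = "ab" * 5000                                           # highly regular system
irregular = "".join(random.choice("ab") for _ in range(10000))  # disordered system
print("ordered  :", len(zlib.compress(ordered.encode())), "bytes")
print("irregular:", len(zlib.compress(irregular.encode())), "bytes")
# The longer the shortest description, the more complex the system.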

Now, when talking about companies and the factors that compose them, a company can be exemplified and understood as a fabric, where the fibers, or areas, are not randomly arranged but organized according to a synthetic unit in which each area contributes to the whole. The company is therefore a phenomenon that cannot be explained by any simple, deterministic law; many business processes and areas build, integrate or adapt packages to handle their information, which is a highly required and necessary source for understanding the processes. Complexity then explains how each area generates information to define itself in a joint framework with other areas in order to achieve maximum performance. This means visualizing the company as a system (see Figure 1).

Companies, whether producers of goods or services, are organisms with feedback processes: they self-produce, self-organize, self-maintain and self-repair in the face of change and, depending on their administrative management, can self-develop.

To understand business complexity, it is necessary to understand the three types of causality generated within the company:

• Linear causality: with the quality of the product, applying a transformation process, a sales object is produced. The process generates a linear causality, that is, "such a thing produces such effects" (see Figure 2).
• Retroactive circular causality: a business process needs to be regulated. It must carry out its transformation according to external needs, its product quality and its desired quality; however, doing it well or badly, managing the processes well or badly, influences the quality of the company positively or negatively (see Figure 3).
• Recursive causality: in this case the effects and the causes are necessary to the process that generates them, because it implies feedback according to the results obtained (see Figure 4).
Companies, as complex organisms, are strongly linked to their environment and organize themselves according to the dynamics of their market, which is a phenomenon with similar characteristics, that is, both organized and random. In this way, the company is strongly tied to two paradigms: order and disorder. The first is conceived as everything that is repetitive, constant and invariable; the second is linked to irregularity and deviation from a given structure. Companies constantly oscillate between the two, since order provides guidelines and a solid organizational structure, while disorder encourages the development of strategy, autonomy, evolution and change.
When the company fluctuates between order and chaos, two quite important effects arise. On the one hand there is the program, which responds to business order: a sequence of predetermined actions that must work under circumstances that allow objectives to be achieved; however, it only works when those circumstances are favorable to the organization. On the other hand, strategy arises in response to disorder or chaos: it allows possible action scenarios to be developed, preparing for the occurrence of something unexpected or random.

Accordingly, the program makes it possible to act economically, since it seeks to save time and resources when attending to the company's needs, but with the limitation that it only works in ideal situations, while strategy provides plasticity and the capacity to adapt to adverse events, which are very common in a market as changeable as today's.

Operación de la gestión tecnológica en la empresa: segunda parte



OPERATION OF TECHNOLOGY MANAGEMENT IN THE COMPANY, PART II

Julián Alberto Uribe Gómez

Continuing with the characteristic points of technology management in business operations, and taking into account the impact of technological decisions on different types of companies, this second part presents the conditional patterns of business behavior in technology management, established through an example frequent in Latin American countries:

The national industrial producer, an SME, acts under the conditions described below, which affect its technological decisions:

• Shortage of its own financial resources.
• Difficulties and limitations in gaining access to development credit.
• Little information about the "state of the art" of its industry, nationally and worldwide.
• Precarious information about the markets for its products and services.
• Minimal or no forecasting capacity for those markets.
• Strong competition with other producers.
• Versatility to produce various items.
• Evasion of tax instruments.
• Possibilities of specialization and complementarity with its competitors.
• The required know-how is simple and open.
• Less surveillance and control by the government.
• Little freedom to change suppliers of raw materials.
• A contested position as a buyer of raw materials.
• Low energy consumption relative to the value of production.
• Considerable margin to mechanize or automate manual labor.
• Little need for highly qualified personnel.

On the other hand, the administrator or industrialist of a multinational or transnational company makes technological decisions under conditions different from the above, for example:

• Wide availability of financial resources.
• Wide access to the local market.
• Extensive knowledge of the "state of the art" of its industry and good forecasting capacity.
• The ability to impart high levels of specialization and qualification to its staff.

Profiles of this nature should be drawn up when looking for significant effects in any industry. The results depend not only on instruments and mechanisms but also on the behavior of each of the different technology managers and on their economic and social importance at the country level.

The case of a large company with national owners and capital is different. The following realities generally apply there, determining a peculiar pattern of technology management; these realities, which affect one or another of the technology factors, are:

• There is easier access to credit for the development of sectors.
• There is abundant information on the "state of the art" of the activity in question, even worldwide.
• Information about the domestic market is quite acceptable.
• It is usually the kind of company that works in sectors whose market has an oligopolistic or monopolistic supply.
• They are under greater surveillance and control by the state and greater tax pressure.
• They have more technical resources, which allow them to be self-sufficient in routine technical activities.
• They are usually in production sectors where energy is a critical factor.
• The technology in these sectors does not lend itself to broad substitutability between machines and people.
• These companies are subject to strong profitability demands from investors in order to survive.

Typical cases are a well-organized national agricultural company, manufacturing industries with national capital, modern private land transport companies or construction companies.

The public company is almost always a large, more or less modernized company operating in certain basic sectors of the goods or services economy. For this type of company, technology management takes place under more generic circumstances, usually the following:

• Their own financial capital is available.
• Being governmental, they face no strong demands to turn a profit.
• They are required to operate safely and continuously.
• They have good knowledge of the "state of the art" of their own activity and good capacity to forecast its evolution.
• They have relatively extensive internal technical services.
• They can select senior technical personnel of high quality.
• They have broad internal autonomy in technical decisions.
• The prices of their products or services are usually controlled or administered through government policy.

The last type of company is formed by public and private capital together; it is therefore worth highlighting the conditions under which technology management is carried out in these mixed companies:

• Ease of acquiring their own financial capital, such as venture capital.
• They frequently operate as a monopoly, or as part of an oligopoly, owing to the magnitude of their operations.
• They have great strength as buyers of raw materials.
• Their activities usually take place in intrinsically capital-intensive economic sectors.
• They gather fairly complete information on the "state of the art" of their activity and on national and international markets, both for their products and for their raw materials. They also have good forecasting capacity.
• They are subject to the requirement of yielding profits, which also demands maximum technical and economic efficiency.

For the technology manager or administrator, it is important to know the system and the initial conditions in which technological decisions must be made, so as to make the best use of the advantages and shore up the disadvantages.