<p>e-com-chef blog: all my recipes and adventures about cooking ecommerce. By Angel Vera (gunfus@gmail.com).</p>
<h1>Welcome to Maven (2023-05-02)</h1>
<h1 id="intro-to-maven">Intro to Maven</h1>
<p>Maven is a build and packaging utility that is widely adopted. I have known about it for a long time but never really had a need or desire to learn it. That changed TODAY! I come from a Java developer and ANT background, with a lot of expertise in Eclipse.</p>
<p>There are lots of tutorials out on the web that explain how to build a Hello World sample. I started with this one from Spring.io: <a href="https://spring.io/guides/gs/maven/">https://spring.io/guides/gs/maven/</a>. I liked it because it was very simple to follow. As I completed the tutorial and ran ‘mvn compile’, things didn’t work, and that is where my adventure for this blog post starts… Where did I go wrong? Is the tutorial wrong? Did I miss a step? And also, where is this compile target? In Ant we have targets and tasks, so where is the compile target?</p>
<h1 id="changing-the-maven-default-src-path">Changing the maven default src path</h1>
<p>I love Eclipse. I have been with it since its beginning, when I had to download it as an Alpha release and learn it because it was the next thing my team was going to be supporting. To this day I still like it better than some of the old Java IDE predecessors.</p>
<p>Because I started this tutorial with a Simple Java project in Eclipse, the structure of the files does not follow the Maven convention of ‘src/main/java/hello’, so Maven didn’t know what to build. After lots of research I found I can add a resources section inside the <code>&lt;build&gt;</code> tag, before the <code>&lt;plugins&gt;</code> one.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><resources>
  <resource>
    <directory>src/</directory>
  </resource>
</resources>
</code></pre></div></div>
<p>That helped get the code compiled. But where is the compile target? I still do not understand how I can add dependencies, and why is there a section called plugin that seems to be doing the packaging?</p>
<h1 id="breaking-down-the-pomxml">Breaking down the pom.xml</h1>
<p>The pom.xml is the main file for Maven. It is used to define how to compile the app, how to package it, what to build and include, how to test it, and how to deploy it. In other words, IT IS THE FILE!</p>
<h2 id="definition">Definition</h2>
<p>To understand it better, POM stands for Project Object Model, which helps explain why it is the ONE file that has everything for your project. It defines the project as an object, if that makes sense. Thus in this file you will find all sorts of tags that help provide the definition of the project:</p>
<ul>
<li>What is the name of the project</li>
<li>What version</li>
<li>Where is the source of the project</li>
<li>Is the source Java 1.8?</li>
<li>What compiler version do we want to use?</li>
<li>Where are the unit tests?</li>
<li>How to package the code</li>
<li>…</li>
</ul>
<h3 id="so-what-we-know-about-pom-so-far">So what do we know about the POM so far?</h3>
<ul>
<li>It is the one file that defines your project as an object, with its properties.</li>
<li>It contains all the information required to compile, test and deploy the project, if needed.</li>
</ul>
<h2 id="inheritance">Inheritance</h2>
<p>Let’s now introduce the concept of inheritance. pom.xml files support inheritance, meaning that you can have child pom.xml files. In fact, if you have only one single pom.xml, in a roundabout way that one pom.xml is already a child POM, because there are properties defined by default in what Maven calls the <a href="https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#super-pom">Super POM</a>. For properties that you don’t define in your file, Maven uses the default ones from the Super POM. You can find the Super POM at: <a href="https://maven.apache.org/ref/3.6.3/maven-model-builder/super-pom.html">maven.apache.org/ref/3.6.3/maven-model-builder/super-pom.html</a></p>
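<p>This fallback behavior can be pictured as a chain of dictionaries where the child pom wins over the Super POM defaults. Here is a toy sketch in Python; the values are illustrative only, not the real Super POM contents:</p>

```python
from collections import ChainMap

# Illustrative defaults, loosely modeled on the Super POM (not the real values).
super_pom = {
    "sourceDirectory": "src/main/java",
    "testSourceDirectory": "src/test/java",
    "packaging": "jar",
}

# Our pom.xml only declares what differs from the defaults.
my_pom = {"sourceDirectory": "src/"}

# ChainMap looks in my_pom first, then falls back to super_pom.
effective_pom = ChainMap(my_pom, super_pom)

print(effective_pom["sourceDirectory"])      # overridden by our pom.xml -> src/
print(effective_pom["testSourceDirectory"])  # inherited from the Super POM -> src/test/java
```

<p>This mirrors what <code>mvn help:effective-pom</code> shows you: your file plus everything inherited from its parents.</p>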
<p>To apply a practical example, let’s look at the scenario of creating a simple Java project in Eclipse. I created a pom.xml to go along with my simple Java project so I could use Maven. In my pom.xml, the path to the Java source is not defined anywhere, so Maven takes it from the Super POM; the element in use here is <code>sourceDirectory</code>. In the Super POM, <code>sourceDirectory</code> points to a different directory than the one Eclipse uses for Java projects: Eclipse uses <code>/src</code> while Maven wants <code>/src/main/java</code>. I am aware that there is an option to create a Maven Java Project, but I decided not to use that.</p>
<p>The important section of the pom.xml related to the path is below, and I have also added the complete pom.xml file with comments to explain the basic lines. Let’s look into the <code>&lt;build&gt;</code> section, and let’s skip over all the details until the next subtitle; for now let’s keep it simple. To configure Maven to find the source files in a different directory, I added the following single line to my pom.xml (you can refer to the entire pom.xml in the section below to see where it is added):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <sourceDirectory>src/</sourceDirectory>
</code></pre></div></div>
<p>This tells Maven where to find the source files to compile. The directory is relative to where the pom.xml file is located. If we did not define or override this property in our pom.xml, Maven would have used the one from the Super POM.</p>
<p>Our complete pom.xml looks like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<!-- Group or organization that the project belongs to, change this to your com.own.company -->
<groupId>com.hcl.commerce.avera</groupId>
<!-- name/id of the project -->
<artifactId>maven_tutorial</artifactId>
<!-- The type of artifact that is being built -->
<packaging>jar</packaging>
<!-- Version number of this project -->
<version>0.0.1</version>
<!-- Other properties that can be defined; this example says that you need Java 1.8 to compile -->
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<build>
<sourceDirectory>src/</sourceDirectory> <!-- Added so that maven knows that we are using the eclipse source directory -->
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>hello.HelloWorld</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
</code></pre></div></div>
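<p>Since the POM really is just an object model serialized as XML, you can read it back programmatically. Below is a small illustrative sketch (not part of Maven) that parses a trimmed copy of the pom.xml shown above; note that Maven POMs live in an XML namespace, which must be supplied when querying elements:</p>

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the pom.xml shown above, inlined for the example.
POM = """<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.hcl.commerce.avera</groupId>
  <artifactId>maven_tutorial</artifactId>
  <packaging>jar</packaging>
  <version>0.0.1</version>
  <build>
    <sourceDirectory>src/</sourceDirectory>
  </build>
</project>"""

# The POM namespace must be mapped to a prefix for find() queries.
ns = {"m": "http://maven.apache.org/POM/4.0.0"}
root = ET.fromstring(POM)

artifact = root.find("m:artifactId", ns).text
src_dir = root.find("m:build/m:sourceDirectory", ns).text
print(artifact, src_dir)  # maven_tutorial src/
```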
<h1 id="the-maven-phases-the-build-lifecycle">The maven phases (The build lifecycle)</h1>
<p>In the previous section we glanced over the build section, looking into build &gt; sourceDirectory to explain inheritance and show that if a property is not there, Maven uses the one defined by its parent or the default. This will become more applicable as you move forward and have to define child pom.xml files yourself. Let me now introduce you to the basic lifecycle.</p>
<p>According to <a href="https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html">The maven docs</a> there are 7 phases predefined already, <code class="language-plaintext highlighter-rouge">build</code> is not one of them, and this really threw me off, but I think this is a me problem.</p>
<ol>
<li>validate - validate the project is correct and all necessary information is available</li>
<li>compile - compile the source code of the project</li>
<li>test - test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed</li>
<li>package - take the compiled code and package it in its distributable format, such as a JAR.</li>
<li>verify - run any checks on results of integration tests to ensure quality criteria are met</li>
<li>install - install the package into the local repository, for use as a dependency in other projects locally</li>
<li>deploy - done in the build environment, copies the final package to the remote repository for sharing with other developers and projects.</li>
</ol>
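<p>A key point of the lifecycle is that running a phase also runs every phase before it, so <code>mvn package</code> validates, compiles and tests first. A toy model of that ordering (my own sketch, not Maven code):</p>

```python
# The default lifecycle phases, in order (abridged to the seven listed above).
LIFECYCLE = ["validate", "compile", "test", "package", "verify", "install", "deploy"]

def phases_to_run(target: str) -> list[str]:
    """Running a phase executes it and every phase that precedes it."""
    return LIFECYCLE[: LIFECYCLE.index(target) + 1]

print(phases_to_run("compile"))  # ['validate', 'compile']
print(phases_to_run("package"))  # ['validate', 'compile', 'test', 'package']
```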
<p>Each of these can be invoked individually when running mvn to build the project. Just running <code class="language-plaintext highlighter-rouge">mvn</code> on its own will give you an error, since you didn’t specify a goal or phase. In the sample above you have to call mvn with the phase that you need. In our case, if we want to confirm that the Java files build, we call <code class="language-plaintext highlighter-rouge">mvn compile</code>. DUH! build=compile… so in fact all the phases make a lot of sense when you sit down and think about it. If you want the jar, then <code class="language-plaintext highlighter-rouge">mvn package</code>, and so on.</p>
<p>With this I conclude my introduction to Maven. I think Maven is a great tool to build Java projects, but at first it can be quite hard to think in this organized way. This goes back to the basics of Software Engineering, which was part of my 400-level courses when I was in university.</p>
<h1>Adding comments to blog (2022-11-04)</h1>
<p>A few years ago I tried to implement comments for the blog, but at that time there were only a few ways to do it with a Jekyll Minimal Mistakes type of blog, and I never got it working. You needed a serverless app and I just got busy with other things. Fast forward a couple of years and features, and GitHub now supports Apps and Discussion threads: <a href="https://giscus.app/">giscus.app</a></p>
<p><a href="https://giscus.app/">giscus.app</a> is a small app that helps integrate the GitHub Discussions feature with a blog.</p>
<p>I was running a very old version of Jekyll and Minimal Mistakes, so I upgraded and <a href="https://www.urbandictionary.com/define.php?term=ba%20da%20bing%20ba%20da%20boom">ba da bing ba da boom</a>… I now have comments on the site.</p>
<p>Disclaimer: I reserve the right to allow or block your post. Please be courteous; there is no need to beat on anyone’s point of view when we are all unique.</p>
<h1>kdb to pem (2022-10-25)</h1>
<p>In a recent engagement I was assisting a colleague with some OpenShift work. He wanted to create a Route that re-encrypts the content before sending it over to the HCL Commerce containers.</p>
<p>My task was to figure out how to extract the private key from a set of files given by the security team. The key and certificate were required to be able to <a href="https://docs.openshift.com/container-platform/4.7/networking/routes/secured-routes.html">create a re-encrypt route with a custom certificate</a>.</p>
<p>In this blog I want to keep it simple and describe the process that I used to extract the key and certificate and create a PEM file so that it can be used to create the OpenShift Route.</p>
<p>I will skip over the extended explanation of the requirements, along with the complications of why you would want such a setup. The requirement, as mentioned, was to figure out how to get the certificate and the key in PEM format out of the set of given files.</p>
<h1 id="the-archive-file">The Archive File</h1>
<p>Without disclosing the client’s information and files, let’s say that I received a set of files in an archive similar to the one below:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>theArchive.zip
├── aBinaryArchiveFile.p7b
├── aServerName.txt
└── aServerName.kdb
</code></pre></div></div>
<p>The task was to find the certificate and key, extract them, and provide them in a PEM file. Remember, we have to find both the key and the certificate.</p>
<h1 id="what-is-a-pem-file">What is a PEM file?</h1>
<p>PEM stands for <em>Privacy Enhanced Mail</em> and is a file type that uses a common standard format in cryptography for sharing keys and certificates.</p>
<p>PEM files are readable to the naked eye, thus can be opened using your favorite text editor.</p>
<p>Care should be taken when the content of the file is a private key.</p>
<p>For more information, <a href="https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail">refer to the PEM article on Wikipedia</a>; surprisingly, it is quite simple.</p>
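<p>To illustrate why PEM files are readable: the format is essentially the binary (DER) bytes base64-encoded between BEGIN/END marker lines, wrapped at 64 columns. A sketch using only the Python standard library; the payload here is dummy bytes, not a real certificate:</p>

```python
import base64
import textwrap

der_bytes = b"not a real certificate, just demo bytes"  # dummy payload

# PEM = base64 of the DER bytes, wrapped at 64 columns, between marker lines.
body = base64.b64encode(der_bytes).decode("ascii")
pem = "-----BEGIN CERTIFICATE-----\n"
pem += "\n".join(textwrap.wrap(body, 64)) + "\n"
pem += "-----END CERTIFICATE-----\n"
print(pem)

# Round-trip: strip the marker lines and decode to recover the original bytes.
inner = "".join(pem.splitlines()[1:-1])
recovered = base64.b64decode(inner)
```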
<h1 id="p7b-file">.p7b file</h1>
<p>This file is a binary format (PKCS #7) that holds a certificate or a certificate chain. You can open the file on any Windows machine and browse and export the certificate or chain of certificates contained in it.</p>
<p>In Linux you can also list the certificates using the openssl tool (for security reasons I don’t show the content of the file, but believe me it works :))</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl pkcs7 -inform DER -outform PEM -in aBinaryArchiveFile.p7b -print_certs > theCert.cer
more theCert.cer
</code></pre></div></div>
<h1 id="kdb-file">.kdb file</h1>
<p>This file is a database file that holds keys and certificates, and it can be password protected. It is frequently referred to as a key database. I was not able to find a specification page for it, but it is a very commonly used file to transfer and hold certificates and keys.</p>
<h1 id="transformation-into-pem">Transformation into PEM</h1>
<p>With these two files I was able to extract the certificate and the key. You only need the .kdb file, as long as you have the password. To extract the data out of the KDB file you first need to convert it into a .p12 file.</p>
<p>To do this transformation you need the Key Management tool. When using HCL Commerce you can use the ts-util container, which includes the command line tool that comes embedded with IBM WebSphere Application Server. There are two options: the GUI or the command line version.</p>
<p>You can refer to the <a href="https://www.ibm.com/docs/en/SSYKE2_8.0.0/com.ibm.java.security.component.80.doc/security-component/iKeyman.8.User.Guide.pdf">IBM Manual on how to use the Key Management Tool</a> or ikeyman for short.</p>
<p>Using the ts-util container for Commerce, below is the sequence of steps to use.</p>
<h2 id="first-list-the-certificate-in-the-keystore-this-will-give-you-the-label-to-use-in-other-cmds">First, list the certificate in the keystore, this will give you the “label” to use in other cmds.</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./ikeycmd -DADD_CMS_SERVICE_PROVIDER_ENABLED=true -cert -list CA -db /tmp/aServerName.kdb
</code></pre></div></div>
<h2 id="second-transform-the-the-cert-and-key-into-a-p12-type-of-file">Second, transform the cert and key into a .p12 type of file</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./ikeycmd -DADD_CMS_SERVICE_PROVIDER_ENABLED=true -cert -export -db /tmp/aServerName.kdb -pw 1234 -label "THE_LABEL_OF_THE_CERT_AS_STEP_1" -target /tmp/aServerName.p12 -target_pw 1234 -target_type pkcs12
</code></pre></div></div>
<h2 id="lastly-using-the-openssl-tool-export-it-to-a-pem-file">Lastly, using the Openssl Tool export it to a pem file</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl pkcs12 -in aServerName.p12 -nodes -nocerts -out aServerName_key.pem
</code></pre></div></div>
<p>You can transform the pkcs12 file into separate PEM files:</p>
<ul>
<li>To export only the key: openssl pkcs12 -in aServerName.p12 -out aServerName_key.pem -nocerts -nodes</li>
<li>To export only the cert: openssl pkcs12 -in aServerName.p12 -out aServerName_cert.pem -clcerts -nokeys</li>
</ul>
<h1>Multi Page Apps, Single Page Apps or Transitional Apps (2022-05-12)</h1>
<p>Today is one of those days when I am posting purely to use this blog as my notepad.</p>
<p>When talking about web apps, there are 3 main app types that I often have to refer to:</p>
<ul>
<li>
<p>MPA = <strong>Multi Page Apps</strong>: Your traditional site, using technologies such as, but not limited to, HTML, JSP, or Servlets. It refers to page-by-page browsing. For HCL Commerce this is what Aurora, Madison, Elite and others use.</p>
</li>
<li>
<p>SPA = <strong>Single Page Apps</strong>: Your more modern website that uses Node.js, React.js or similar (again, a non-restrictive list). It refers to a page developed for much (if not all) of its parts in JavaScript. For HCL Commerce this refers to the new Sapphire or Emerald stores.</p>
</li>
<li>
<p><strong>Transitional Web Apps</strong>: Refers to a hybrid, an in-between approach.</p>
</li>
</ul>
<p>I am still learning about this new concept of Transitional Web Apps, but here is a video that a friend recently posted and I want to repost and take a note.</p>
<p>You can try writing your transitional app at <a href="https://svelte.dev/">https://svelte.dev/</a>; you will notice that it has the look and feel of writing a SPA, and there are plugins for vscode.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/860d8usGC0o" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h1>Installing Kibana (2022-02-16)</h1>
<p>HCL Commerce Search 9.1 uses <a href="https://www.elastic.co/">Elasticsearch</a> to hold all the index data. With this move, Elasticsearch becomes a very important piece of the solution, and as such it should be monitored and tuned for your site so that it performs optimally. I plan to write a future post with information on how to tune Elasticsearch; for now I will concentrate on how to install Kibana, a tool that helps you monitor your Elasticsearch cluster.</p>
<p>In this post I document the steps that I followed to install Kibana using the bitnami helm charts. During this exercise I enlisted the help of my good friend Raimee, as I knew he had done it before, and indeed he had. It’s always good to have a wingman!</p>
<p>Most of the installs that we do in K8s are straightforward: you find the helm charts and follow the install instructions. But every once in a while there is a bit of a gotcha; I am happy to report that there weren’t too many in this Kibana install process.</p>
<h1 id="assumptions">Assumptions</h1>
<p>We are starting from the point that you already have a version of Elasticsearch installed and working. In my case I had Elasticsearch installed under a namespace <strong><em>elastic</em></strong>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>> kubectl get namespaces
NAME      STATUS   AGE
elastic   Active   209d
</code></pre></div></div>
<h1 id="0-official-documentation-from-bitnami">0. Official documentation from bitnami</h1>
<ul>
<li>Read through the following official information:
<a href="https://github.com/bitnami/charts/tree/master/bitnami/kibana/">https://github.com/bitnami/charts/tree/master/bitnami/kibana/</a></li>
<li>Know what version of Elasticsearch you are using to find out what version of Kibana you need. Use this matrix to help you find the kibana version that you will need for a particular Elasticsearch (click on <strong><em>Product compatibility</em></strong>): <a href="https://www.elastic.co/support/matrix">https://www.elastic.co/support/matrix</a></li>
</ul>
<h1 id="1-add-the-bitnami-repo">1. Add the bitnami repo</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helm repo add bitnami https://charts.bitnami.com/bitnami
</code></pre></div></div>
<h1 id="2-install-kibana">2. Install kibana</h1>
<ol>
<li>
<p>Find out the service of your elasticsearch: <code class="language-plaintext highlighter-rouge">kubectl get svc -n elastic</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>angelvera@Angels-MBP ~ % kubectl get svc -n elastic
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
hcl-commerce-elasticsearch            ClusterIP   23.345.123.230   <none>        9200/TCP,9300/TCP   209d
hcl-commerce-elasticsearch-headless   ClusterIP   None             <none>        9200/TCP,9300/TCP   209d
</code></pre></div> </div>
</li>
<li>
<p>In our case we needed kibana 7.12, so we use the following command (you will need to modify the <em>host</em>, <em>port</em> and <em>image.tag</em>)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> helm install my-kibana bitnami/kibana -n elastic --set "elasticsearch.hosts[0]=hcl-commerce-elasticsearch,elasticsearch.port=9200" --set image.tag=7.12.0
</code></pre></div> </div>
</li>
</ol>
<h1 id="3-testing-kibana">3. Testing kibana</h1>
<p>Ensure Kibana comes up correctly by monitoring the log (here <em>my-kibana-7c9959548b-lmf27</em> is your POD name): <code class="language-plaintext highlighter-rouge">kubectl logs -n elastic my-kibana-7c9959548b-lmf27 -f</code></p>
<p>If you see a message like <strong><em>is incompatible with the following Elasticsearch nodes in your cluster</em></strong>, that is an error about the Kibana version not being able to work with the installed Elasticsearch version. Refer to Step 0 to find out what version you need to install using the image.tag attribute.</p>
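<p>As a quick mental model of the rule behind that error: Kibana is generally expected to match Elasticsearch's major.minor version. The sketch below encodes that rule of thumb; it is a simplification, and the Elastic support matrix from Step 0 remains the authoritative source:</p>

```python
def compatible(kibana: str, elasticsearch: str) -> bool:
    """Rough rule of thumb: Kibana should match Elasticsearch's major.minor.

    Simplified model only; check the official Elastic support matrix
    for the authoritative compatibility answer.
    """
    return kibana.split(".")[:2] == elasticsearch.split(".")[:2]

print(compatible("7.12.0", "7.12.1"))  # True  -> versions line up
print(compatible("7.16.0", "7.12.1"))  # False -> Kibana too new for this cluster
```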
<ol>
<li>
<p>To see the Kibana UI, you will use port-forward:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl port-forward svc/my-kibana 5601 -n elastic
</code></pre></div> </div>
</li>
<li>Launch your browser and go to <a href="http://localhost:5601/">http://localhost:5601/</a></li>
<li>Click <strong><em>Explore on my own</em></strong></li>
<li>Open the hamburger menu on the top left.</li>
<li>Scroll all the way down to <strong><em>Stack Monitoring</em></strong></li>
</ol>
<p>From there you should be able to see something like the screenshot below, where you can navigate to your Nodes or Indices and use the different graphs to monitor the state of your Elasticsearch cluster.</p>
<p><img src="/assets/2022/hcl_commerce/kibana_712_ui.png" alt="Kibana UI" /></p>
<h1>Docker problems with internet connectivity (2022-01-27)</h1>
<p>Today I am creating this post as a way to remind myself, and also to spread the knowledge of this one problem that keeps coming back to me every few months.</p>
<p>Let me first describe the problem. I am not sure if it is related to the version of Docker that I am on or the OS/Docker combination that I am using; either way, here it goes.</p>
<p>The problem was brought to my attention as “Hey [sous-chef] using the jenkins container I can’t seem to clone the repo”, so that is when I had to get into action and bring up all my cooking tools and knowledge.</p>
<p>As I went on to test, first I needed to recreate the problem. I noticed that the jenkins container was deployed in docker using docker-compose, and that the git repository in question was outside of the docker local network.</p>
<p>With that information I used my handy dandy tool nc to test the connectivity, and my packets were not reaching the outside world. I then tried outside of the container on the host computer and things worked okay. After playing around with the network settings and IP rules, looking into the logs, and bringing the container down and up, nothing seemed to work. Connectivity between the containers appeared to be fine, so it was just the internet connection from inside the container.</p>
<p>That is when Google came to help… and after a few minutes of research I found:</p>
<p>(https://stackoverflow.com/questions/39828185/no-internet-connection-inside-docker-containers)[
https://stackoverflow.com/questions/39828185/no-internet-connection-inside-docker-containers]</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pkill docker          # stop the docker daemon
iptables -t nat -F    # flush the NAT rules docker created
ifconfig docker0 down # bring the docker bridge down
brctl delbr docker0   # delete the bridge
docker -d             # restart the daemon (on newer versions: dockerd)
</code></pre></div></div>
<p>And to ensure completeness on this record, I am using:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>::::::::::::::
/etc/os-release
::::::::::::::
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
</code></pre></div></div>
<p>Docker version</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Docker version 20.10.5, build 55c4c88
</code></pre></div></div>
<p>This problem seems to happen every couple of months, thus I am recording this as a post so that I can persist the knowledge in my head.</p>
<h1>Adding a New Field to the Index (2021-06-22)</h1>
<p>In previous posts I have covered the new architecture of HCL Commerce 9.1 Search when deployed with the Elasticsearch-based solution: ingest, query, NiFi, Elasticsearch and others. In this post we will use a magnifying glass to dive deeper, and we will concentrate on one piece: Elasticsearch. Throughout life I have never silenced my inner child; it always shows a passion for being curious. I am keen to understand the core concepts, the basics, how-is-it-made!</p>
<p>This is a bit of a long introduction, but stay with me (<a href="#step-1-create-an-index">or skip the intro</a>). Let’s state what we are going to do and what we are not doing. In a future post I hope to cover how to apply what you learn here to HCL Commerce; in the meantime you can use this post to better understand what you are doing in the <a href="https://help.hcltechsw.com/commerce/9.1.0/tutorials/tutorial/tsd_search6_intro_elastic.html">Profit Margin tutorial on the Help Center</a>, more specifically in the step related to <a href="https://help.hcltechsw.com/commerce/9.1.0/tutorials/tutorial/tsd_connectorconfigure_elastic.html">Configuring NiFi</a>, in the part where you are updating the Processor to <em>Populate Index Schema</em>.</p>
<p>In Step 1 we create a new index. Using a new empty index helps me break down the steps and provide insight into the simplicity of the process. The same concepts can be applied when using a more complex index like the one provided with HCL Commerce; just be aware that in our case we are creating our own index, while with HCL Commerce the index is already created for you.</p>
<p>In Step 2 we add some data, because having an empty index is boring. Step 3 then modifies the mappings, better known as the schema, and finally in Step 4 we add the data that maps to the new schema object.</p>
<p>You can use the documentation directly from Elasticsearch to understand how things work: <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html">https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html</a></p>
<p>With our magnifying glass looking into HCL Commerce 9.1 Search, we look into the Elasticsearch (ES) server only and make requests directly to the ES server; mine is deployed at: http://gunfus_es:30200/</p>
<p>In this post we are removing NiFi and the Ingest container; we are only going to be working with the Elasticsearch container. The intention of this post is just to explain how to update an already existing Elasticsearch index with a new field, which in Elasticsearch terms is referred to as an <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.13/object.html">object field</a>. Because we are using just Elasticsearch to explain the process, let’s create an index with a small mapping to walk through all the steps.</p>
<p>You can use Postman or a curl command for all the API calls here; I will post the curl commands.</p>
<h1 id="step-1-create-an-index">Step 1: Create an Index</h1>
<p>In this step we want to create an index with a simple mapping. In HCL Commerce v9.1 the indexes are already created for you, so although this post uses this step, in the HCL Commerce world you can skip it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl --location --request PUT 'http://gunfus_es:30200/.av.index' \
--header 'Content-Type: application/json' \
--data-raw '{
"mappings" : {
"properties": {
"user": {
"type": "keyword"
}
}
}
}'
</code></pre></div></div>
<p>The expected response is:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"acknowledged": true,
"shards_acknowledged": true,
"index": ".av.index"
}
</code></pre></div></div>
<p>You can verify your index by getting a list of the indices using <code class="language-plaintext highlighter-rouge">_cat/indices</code>, as documented here: <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.13/cat-indices.html">https://www.elastic.co/guide/en/elasticsearch/reference/7.13/cat-indices.html</a></p>
<h1 id="step-2-add-data-into-the-index">Step 2: Add data into the index</h1>
<p>This step is not required, but for the sake of providing a complete example, let’s add some data to the index. The data must match the index schema that we just created; such a document looks like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl --location --request PUT 'http://gunfus_es:30200/.av.index/_doc/1' \
--header 'Content-Type: application/json' \
--data-raw '{
"user": "gunfus"
}'
</code></pre></div></div>
<p>The expected response is:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"_index": ".av.index",
"_type": "_doc",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 0,
"_primary_term": 1
}
</code></pre></div></div>
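<p>Before indexing, it can be handy to sanity-check a document against the mapping. Here is a small illustrative Python helper (not part of Elasticsearch or HCL Commerce; the function name is made up) that reports any top-level document fields missing from the mapping:</p>

```python
def unmapped_fields(doc, mapping):
    """Return the set of top-level document fields absent from the mapping's properties.

    With dynamic mapping enabled (the Elasticsearch default), such fields would be
    added to the index schema automatically, which may not be what you want.
    """
    return set(doc) - set(mapping["mappings"]["properties"])

mapping = {"mappings": {"properties": {"user": {"type": "keyword"}}}}
print(unmapped_fields({"user": "gunfus"}, mapping))            # nothing unmapped
print(unmapped_fields({"user": "gunfus", "age": 1}, mapping))  # "age" is unmapped
```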
<h1 id="step-3-add-the-new-field-to-an-already-existing-schema-with-an-object-field">Step 3: Add the new field to an already existing schema with an object field</h1>
<p>This is the part that is not done by HCL Commerce and is also the part that we are interested to show in this post.</p>
<p>At this time our index is very simple and it only has one field, of type <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.13/keyword.html">keyword</a>.</p>
<p>To add the new field we follow this document from the Elasticsearch documentation: <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html">Update mapping API</a>. In our case we are going to add a child object type under the user field. Notice that we are adding <code class="language-plaintext highlighter-rouge">user.fields.name</code> and not just <code class="language-plaintext highlighter-rouge">user.name</code>. This is because upon trying to add <code class="language-plaintext highlighter-rouge">user.name</code> I encountered errors; I didn’t do much research, but <code class="language-plaintext highlighter-rouge">user.fields.name</code> is what the Elasticsearch tutorial does, so we followed along.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl --location --request PUT 'http://gunfus_es:30200/.av.index/_mappings' \
--header 'Content-Type: application/json' \
--data-raw '{
"properties": {
"user": {
"type": "keyword",
"fields": {
"name": {
"type": "keyword"
}
}
}
}
}'
</code></pre></div></div>
<p>The expected response simply acknowledges the mapping update:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"acknowledged": true
}
</code></pre></div></div>
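<p>The payload for the mapping update can likewise be generated from the existing mapping. A minimal Python sketch (the helper name is my own) that adds a sub-field under an existing property and returns the body for the PUT _mappings call:</p>

```python
def add_subfield(properties, field, subfield, subfield_type="keyword"):
    """Return a PUT _mappings body that adds `subfield` under `field`'s "fields" block."""
    prop = dict(properties[field])        # copy so the input mapping is not mutated
    fields = dict(prop.get("fields", {}))
    fields[subfield] = {"type": subfield_type}
    prop["fields"] = fields
    return {"properties": {field: prop}}

# Start from the Step 1 mapping and add user.fields.name, as in the curl example above.
body = add_subfield({"user": {"type": "keyword"}}, "user", "name")
```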
<h1 id="step-4-adding-data-into-the-new-data-structure">Step 4: Adding data into the new data structure</h1>
<p>At this time our index schema has three fields: user, user.fields, and user.fields.name. So we can now insert data in user.fields.name.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl --location --request PUT 'http://gunfus_es:30200/.av.index/_doc/1' \
--header 'Content-Type: application/json' \
--data-raw '{
"user": "gunfus",
"fields": {
"name": "firstNameGunfus"
}
}'
</code></pre></div></div>Angel Veragunfus@gmail.comIn a previous post I covered the new architecture of HCL Commerce 9.1 Search when deployed with the Elasticsearch based solution: Ingest, Query, NiFi, Elasticsearch, and others. In this post we will use a magnifying glass to dive deeper and concentrate on one: Elasticsearch. Throughout life I have never stopped my inner child; it always shows a passion for being curious. I am keen to understand the core concepts, the basics, How-is-it-made!Meet HCL Commerce Search 9.1 - Part 22020-09-17T00:00:00+00:002020-09-17T00:00:00+00:00http://www.gunfus.com/Meet_Search_commerce9111_part2<p>This is part 2 of a series of articles that aim to introduce the new feature in HCL Commerce Search that uses Elasticsearch as an engine. <a href="/Meet_Search_Commerce9111_part1/">Part 1</a> talked about the overall solution. In this post we will focus on two components, ingest and query; I will provide a high-level description and then show how to turn tracing on for those components. But before jumping into the main topic…</p>
<p>Notice that I am not just saying HCL Commerce with Elasticsearch, and that is intentional. HCL Commerce Search has added benefits on top of Elasticsearch; HCL Commerce does not use Elasticsearch standalone, there are added components and features on top of it. Thus it is not HCL Commerce + Elasticsearch, it is HCL Commerce + HCL Commerce Search, and HCL Commerce Search uses Elasticsearch as a data layer. It is important to understand that, unlike the old Solr based solution, the adoption of Elasticsearch has been architected to ensure a clean separation of the components; this better component isolation lets HCL Commerce Search leap into the new era of microservices.</p>
<p><img src="/assets/2020/hcl_commerce/search_part2.png" alt="Ingest and Query highlighted in the overall architecture" /></p>
<p>Let’s get to the topic of this post: the ingest and query services are two of the added components which enable HCL Commerce Search to work with Elasticsearch. You can read the detailed description of these services in the HCL Commerce Help Center at <a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdsearchingest.html">Using the V9.1 HCL Commerce Search service</a>. I only intend to describe them here at a high level and then jump into how to turn tracing on:</p>
<ul>
<li>Each of them is a container of its own</li>
<li>They can be scaled individually to support high availability and load balancing</li>
</ul>
<h1 id="the-query-container">The <strong><em>query</em></strong> container</h1>
<ul>
<li>Full description and details at: <a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdelasticsearchquery.html">The Query Service</a></li>
<li>It builds the query and then sends it to Elasticsearch for execution</li>
<li>It is used to extract information from Elasticsearch and make it ready for the API</li>
<li>It has support for natural language processing (NLP)</li>
<li>It has extensions that can participate as part of a specific query</li>
<li>It is implemented using <a href="https://spring.io/guides/gs/spring-boot/">Spring Boot</a></li>
</ul>
<h1 id="the-ingest-container">The <strong><em>ingest</em></strong> container</h1>
<ul>
<li>Full description and details at: <a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdsearchconnectors.html">The Ingest service</a></li>
<li>It is used to consume data from different sources and populate the Elasticsearch index. The dataflow and transformation of such data is controlled using the NiFi pipeline.</li>
<li>Ts-app calls ingest for index building and for the status of the index build job</li>
<li>It talks to the Registry, a complementary application to NiFi: <a href="https://nifi.apache.org/registry">https://nifi.apache.org/registry</a></li>
<li>It talks to Zookeeper for storage of the configuration and other details of the connectors</li>
<li>It talks to NiFi to execute the process and interface with the pipeline in a graphical way</li>
<li>It talks to Elasticsearch</li>
<li>It is responsible for creating the connectors in NiFi using the Registry</li>
<li>It is implemented using <a href="https://spring.io/guides/gs/spring-boot/">Spring Boot</a></li>
</ul>
<h1 id="turning-trace-on-hcl-commerce-search">Turning Trace on HCL Commerce Search</h1>
<p>The development team for HCL Commerce Search continues to work hard implementing new features; one feature receiving a lot of attention is serviceability, and I know they will be driving important changes when it comes to logging. For now you can turn on logging and tracing in two different ways:</p>
<ol>
<li>the documented way in the HCL Commerce Help Center: <a href="https://help.hcltechsw.com/commerce/9.1.0/search/refs/rsdingest_troubleshooting.html?hl=log">Logging and troubleshooting the Ingest and Query services</a></li>
<li>by turning on the Spring Boot logging.</li>
</ol>
<p>Turning on Spring Boot logging requires redeploying the container, as you need to add an environment flag that enables tracing for Spring Boot. You can add that flag in your Helm charts or the docker-compose yaml. In K8S, using the Helm chart, you will do the following:</p>
<ol>
<li>Open the chart.yaml</li>
<li>Find the <code class="language-plaintext highlighter-rouge">ingestApp:</code> section</li>
<li>Add to the <code class="language-plaintext highlighter-rouge">envParameters:</code> section: <code class="language-plaintext highlighter-rouge">logging.level.org.springframework: DEBUG</code></li>
<li>Redeploy the container.</li>
</ol>
<p>The result should be:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ingestApp:
name: ingest-app
image: search-ingest-app
tag: 9.1.1.0
replica: 1
resources:
requests:
memory: 2048Mi
cpu: 500m
limits:
memory: 4096Mi
cpu: 2
## when using custom envParameters, use key: value format
envParameters: {
logging.level.org.springframework: DEBUG
}
nodeLabel: ""
fileBeatConfigMap: ""
</code></pre></div></div>Angel Veragunfus@gmail.comThis is part 2 of a series of articles that aim to introduce the new feature in HCL Commerce Search that uses Elasticsearch as an engine. Part 1 talked about the overall solution. In this post we will focus on two components, ingest and query; I will provide a high-level description and then show how to turn tracing on for those components. But before jumping into the main topic…Meet HCL Commerce Search 9.1 - Part 12020-09-01T00:00:00+00:002020-09-01T00:00:00+00:00http://www.gunfus.com/Meet_Search_Commerce9111_part1<p>In this post I introduce the new HCL Commerce Search 9.1 solution. In <strong><em>part 1</em></strong> (this post) I will cover the new components of search, such as Apache NiFi and Elasticsearch, and in <a href="http://www.gunfus.com/Meet_Search_commerce9111_part2/">part 2</a> I will cover how to enable trace.</p>
<p>The HCL Commerce 9.1 release involved a huge effort from many people on the product team, and as they continue to come out with updated releases that effort is ongoing. The changes that went into the platform feel like a renewed product, a fresh start in many aspects, a much-needed change. Many clients struggled with preprocessing/indexing and search relevancy issues with Solr, and the new search solution provides some exciting improvements in those areas.</p>
<p>Many of us are navigating this new path blindfolded due to the updated technologies in the new search solution, such as Apache NiFi and the Ingest and Query services. Not everyone embraces change as a new opportunity, but I do. In this post I want to share some interesting aspects of the new HCL Commerce 9.1 Search.</p>
<p>IBM WebSphere Commerce introduced IBM Commerce Search in v7 FEP2 and subsequently RESTified and separated the search service from the transaction server in v7 FEP7; that marked a transition point to a new era. A new era where the search component is now a separate component and is as important as the database. Fast forward to 2020: HCL Commerce 9.1 adopted a more cloud-ready platform by hooking it up with Elasticsearch and splitting the server based approach that was designed with Solr into multiple services. The services are now split into a container architecture: NiFi, Query, Ingest, Elasticsearch, Kafka, Zookeeper. All of these components work together to provide the HCL Commerce Search solution in 9.1.</p>
<p>Without going too much into detail, <a href="https://www.google.com/search?q=What+is+elastic+search">Elasticsearch</a> is a search engine based on the <a href="https://www.google.com/search?q=What+is+apache+lucene">Apache Lucene project</a>. Solr was also based on the <a href="https://www.google.com/search?q=What+is+apache+lucene">Apache Lucene project</a>, but that is where the similarities end.</p>
<p>Moving from Solr to Elasticsearch will require careful planning and a thorough analysis of your current search services and customizations. My colleague Rhett Daniel recently posted a blog that touches on considerations in moving to the new search solution: <a href="https://support.hcltechsw.com/community?id=community_blog&sys_id=adb71920db729850a45ad9fcd39619cf">The Journey to New HCL Commerce V9.1 Search</a></p>
<p>While deploying <a href="https://help.hcltechsw.com/commerce/9.1.0/install/tasks/tdeploykubern91-commerce.html">HCL Commerce v9.1 on a Kubernetes cluster</a>, early in the process you are faced with the task of deciding whether you will use Elasticsearch or Solr. If you select Elasticsearch, the first step is to install the Elasticsearch component:</p>
<h1 id="first-step-related-to-elasticsearch-installing-elasticsearch">First step related to Elasticsearch: Installing Elasticsearch</h1>
<p>Following the steps from the Help Center in the topic <strong><em>Deploying HCL Commerce on a Kubernetes cluster</em></strong>, see the section on <a href="https://help.hcltechsw.com/commerce/9.1.0/install/tasks/tdeploykubern91-commerce.html">“Deploy Elasticsearch.”</a></p>
<ol>
<li><code class="language-plaintext highlighter-rouge">kubectl create ns elastic</code></li>
<li><code class="language-plaintext highlighter-rouge">helm repo add elastic https://helm.elastic.co</code></li>
<li><code class="language-plaintext highlighter-rouge">helm repo update</code></li>
<li>In the bundle for the Helm charts, navigate to <code class="language-plaintext highlighter-rouge">hcl-commerce-helmchart/sample_values/elasticsearch-values.yaml</code>; there is a sample file to use for your Elasticsearch deployment</li>
<li><code class="language-plaintext highlighter-rouge">helm install elasticsearch elastic/elasticsearch -n elastic -f localvalues.yaml</code></li>
<li>Ensure that the container is running: <code class="language-plaintext highlighter-rouge">kubectl get pods -n elastic</code></li>
</ol>
<h1 id="second-step-related-to-elasticsearch-modifying-the-commerce-helm-charts">Second step related to Elasticsearch: Modifying the Commerce Helm charts</h1>
<p>Later in <strong><em>Deploying HCL Commerce on a Kubernetes cluster</em></strong>, after completing other install procedures, you will get to the Commerce Helm charts, where you need to specify that you are deploying with Elasticsearch. The following property/value pair controls the deployment in the values.yaml file.</p>
<table>
<thead>
<tr>
<th>package</th>
<th>helm file</th>
<th>property</th>
</tr>
</thead>
<tbody>
<tr>
<td>HCLCommerceDevOps</td>
<td><code class="language-plaintext highlighter-rouge">hcl-commerce-helmchart/stable/hcl-commerce</code></td>
<td><code class="language-plaintext highlighter-rouge">searchEngine: elastic</code></td>
</tr>
</tbody>
</table>
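<p>For illustration, this switch is just a key/value entry in your values.yaml. The exact nesting varies by chart version, so treat the parent key below as a placeholder and verify it against the values.yaml in your own Helm chart bundle:</p>

```yaml
# values.yaml (excerpt) -- parent key is illustrative, check your chart's structure
common:
  searchEngine: elastic
```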
<p>After modifying the Helm charts you will proceed to install the shared commerce components for HCL Commerce; for more information on what is shared, refer to the <a href="https://help.hcltechsw.com/commerce/9.1.0/install/refs/riginfrastructure.html">HCL Commerce production environment overview</a></p>
<p>As you can see, the enablement of HCL Commerce Search with Elasticsearch is straightforward. There are a few extra containers that come into play to make the entire solution work; refer to <a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdsearchingest.html">Using the V9.1 HCL Commerce Search service</a>:</p>
<ul>
<li><a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdsearchconnectors.html">Ingest</a> for loading data</li>
<li><a href="https://help.hcltechsw.com/commerce/9.1.0/search/concepts/csdelasticsearchquery.html">Query</a> for getting data</li>
<li><a href="https://nifi.apache.org/">NiFi</a> for visualizing, modifying and controlling the pipeline of services that are used to load the data and make it consumable by Elasticsearch</li>
<li><a href="https://www.elastic.co/elasticsearch/">Elasticsearch</a> for storing the data and the core search engine</li>
<li><a href="https://kafka.apache.org/">Kafka</a>/<a href="https://zookeeper.apache.org/">Zookeeper</a> for storing and sharing the configuration</li>
</ul>
<p>Using HCL Commerce with Elasticsearch will require some new terminology and new practices. <a href="https://support.hcltechsw.com/community?id=community_blog&sys_id=adb71920db729850a45ad9fcd39619cf">The Journey to New HCL Commerce V9.1 Search</a>, together with part 1 of this article, will provide you with a high-level understanding of the new components in play. Stay tuned for part 2 of the <strong><em>“Meet HCL Commerce 9.1 Search”</em></strong> series, where I will discuss how to turn on debugging in the new containers.</p>Angel Veragunfus@gmail.comIn this post I introduce the new HCL Commerce Search 9.1 solution. In part 1 (this post) I will cover the new components of search, such as Apache NiFi and Elasticsearch, and in part 2 I will cover how to enable trace. The HCL Commerce 9.1 release involved a huge effort from many people on the product team, and as they continue to come out with updated releases that effort is ongoing. The changes that went into the platform feel like a renewed product, a fresh start in many aspects, a much-needed change. Many clients struggled with preprocessing/indexing and search relevancy issues with Solr, and the new search solution provides some exciting improvements in those areas. Many of us are navigating this new path blindfolded due to the updated technologies in the new search solution, such as Apache NiFi and the Ingest and Query services. Not everyone embraces change as a new opportunity, but I do. In this post I want to share some interesting aspects of the new HCL Commerce 9.1 Search. IBM WebSphere Commerce introduced IBM Commerce Search in v7 FEP2 and subsequently RESTified and separated the search service from the transaction server in v7 FEP7; that marked a transition point to a new era. A new era where the search component is now a separate component and is as important as the database. 
Fast forward to 2020: HCL Commerce 9.1 adopted a more cloud-ready platform by hooking it up with Elasticsearch and splitting the server based approach that was designed with Solr into multiple services. The services are now split into a container architecture: NiFi, Query, Ingest, Elasticsearch, Kafka, Zookeeper. All of these components work together to provide the HCL Commerce Search solution in 9.1. Without going too much into detail, Elasticsearch is a search engine based on the Apache Lucene project; Solr was also based on the Apache Lucene project, but that is where the similarities end. Moving from Solr to Elasticsearch will require careful planning and a thorough analysis of your current search services and customizations. My colleague Rhett Daniel recently posted a blog that touches on considerations in moving to the new search solution: The Journey to New HCL Commerce V9.1 Search While deploying HCL Commerce v9.1 on a Kubernetes cluster, early in the process you are faced with the task of deciding whether you will use Elasticsearch or Solr. 
If you select Elasticsearch, the first step is to install the Elasticsearch component: First step related to Elasticsearch: Installing Elasticsearch Following the steps from the Help Center in the topic Deploying HCL Commerce on a Kubernetes cluster, see the section on “Deploy Elasticsearch.” kubectl create ns elastic helm repo add elastic https://helm.elastic.co helm repo update In the bundle for the Helm charts, navigate to hcl-commerce-helmchart/sample_values/elasticsearch-values.yaml; there is a sample file to use for your Elasticsearch deployment helm install elasticsearch elastic/elasticsearch -n elastic -f localvalues.yaml Ensure that the container is running: kubectl get pods -n elastic Second step related to Elasticsearch: Modifying the Commerce Helm charts Later in Deploying HCL Commerce on a Kubernetes cluster, after completing other install procedures, you will get to the Commerce Helm charts, where you need to specify that you are deploying with Elasticsearch. The following property/value pair controls the deployment in the values.yaml file. package helm file property HCLCommerceDevOps hcl-commerce-helmchart/stable/hcl-commerce searchEngine: elastic After modifying the Helm charts you will proceed to install the shared commerce components for HCL Commerce; for more information on what is shared, refer to the HCL Commerce production environment overview As you can see, the enablement of HCL Commerce Search with Elasticsearch is straightforward. 
There are a few extra containers that come into play to make the entire solution work; refer to Using the V9.1 HCL Commerce Search service: Ingest for loading data Query for getting data NiFi for visualizing, modifying and controlling the pipeline of services that are used to load the data and make it consumable by Elasticsearch Elasticsearch for storing the data and the core search engine Kafka/Zookeeper for storing and sharing the configuration Using HCL Commerce with Elasticsearch will require some new terminology and new practices. The Journey to New HCL Commerce V9.1 Search, together with part 1 of this article, will provide you with a high-level understanding of the new components in play. Stay tuned for part 2 of the “Meet HCL Commerce 9.1 Search” series, where I will discuss how to turn on debugging in the new containers.Deploying HCL Commerce v9.1.1 Utility container to K8S2020-08-27T00:00:00+00:002020-08-27T00:00:00+00:00http://www.gunfus.com/deploy-ts-utils<p>In today’s post I will describe the usage and deployment of the HCL Commerce v9.1.1 ts-util container in Kubernetes.</p>
<p>The ts-util container is where HCL Commerce has placed the scripts that support command line utility activities. HCL Commerce uses several utilities that run from the command line. These activities use non-REST API interfaces to load data into the database (dataload), encrypt passwords (wcs_encrypt), extract data from the database (dataextract), load policy mappings (acpolicies), and many other tasks.</p>
<p>One of the first activities you will face when deploying HCL Commerce is encrypting the password for the spiuser. For this activity you will need to launch the ts-util container and use wcs_encrypt.</p>
<h1 id="deploying-the-ts-utils-in-kubernetes">Deploying the ts-utils in Kubernetes</h1>
<p>Deploying the ts-util container in a K8S environment is very simple:</p>
<ol>
<li>Download <a href="/../assets/2020/hcl_commerce/ts-utils.yaml">ts-util.yaml</a></li>
<li>Edit the file, specifically the <strong><em>image</em></strong> variable, and use your Docker image repository path</li>
<li><code class="language-plaintext highlighter-rouge">kubectl apply -f ts-utils.yaml</code></li>
<li>Wait until the container is running</li>
<li><code class="language-plaintext highlighter-rouge">kubectl exec -it ts-utls-5656599bdf-9khjb bash</code></li>
<li><code class="language-plaintext highlighter-rouge">cd /opt/WebSphere/CommerceServer90/bin</code></li>
<li><code class="language-plaintext highlighter-rouge">./wcs_encrypt.sh aGo0d.sTRong.pAsw8rd!</code></li>
</ol>Angel Veragunfus@gmail.comIn today post I will describe the usage and deployment of the HCL Commerce v9.1.1 ts-util container in kubernetes.