Writings mostly about Lotus Notes/Domino...by me :
Jesper Kiaer,Espergærde, Denmark

Looking for a Notes/Domino developer? I'm available

How to print a PDF file from Java
A customer of mine has a solution that creates invoices in PDF format.

They want a solution where, for example, twenty PDF invoices are created, but they only want to print them, not work with the PDF files themselves.

So a solution could be to create the PDF files and then send them to a printer in Java.

However, the PDF format is complicated stuff, so you really want to use something that works no matter how complex the PDF file is.

That really rules out a lot of possibilities. There are Java PDF libraries out there like Apache PDFBox,

but my advice is to use the PDF viewer "Sumatra PDF" as the solution. Forget about Adobe Reader if you are only viewing files, it is very bloated; go for the Sumatra PDF viewer.

I have used it for many years and it is really excellent.

If your users don't have Sumatra PDF installed, just use the Portable version instead.

Printing is really easy via command line options.

This will print to the default printer:

String[] params = new String [3];
params[0] = "C:\\Program Files (x86)\\SumatraPDF\\SumatraPDF.exe";
params[1] = "-print-to-default";
params[2] = "c:\\test\\test.pdf";

If you want a named printer, pass the "-print-to" option and the printer name as two separate array elements (so the array needs an extra slot):

params[1] = "-print-to";
params[2] = "<printer-name>";
params[3] = "c:\\test\\test.pdf";

To show a printer dialog instead:

String[] params = new String[3];
params[0] = "C:\\Program Files (x86)\\SumatraPDF\\SumatraPDF.exe";
params[1] = "-print-dialog";
params[2] = "c:\\test\\test.pdf";
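The params arrays are only half the story; they still need to be executed. A minimal sketch using ProcessBuilder (class and method names are mine, and the exe path must match your install or the portable version):

```java
import java.io.IOException;

public class PdfPrint {

    // Builds the SumatraPDF command line for printing a PDF
    // to the default printer, one argument per array element.
    static String[] buildCommand(String exePath, String pdfPath) {
        return new String[] { exePath, "-print-to-default", pdfPath };
    }

    // Launches SumatraPDF and waits for it to hand the job to the
    // print spooler. Returns the process exit code.
    static int print(String exePath, String pdfPath)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(buildCommand(exePath, pdfPath)).start();
        return p.waitFor();
    }
}
```

Passing each option as its own array element matters: ProcessBuilder quotes every element, so combining flag and value in one string would reach SumatraPDF as a single unrecognized argument.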

See other command line options here: https://github.com/sumatrapdfreader/sumatrapdf/wiki/Command-line-arguments

Published by: Jesper B. Kiær at 28-04-2015 13:10:00 Full Post

IBM Notes FP3 issue: "LS2J Error: Java Constructor Failed To Execute", and Eclipse Update Site can no longer import
Got this strange error today trying to import the newest XPages Extension Library into an Update Site: "LS2J Error: Java Constructor Failed To Execute".

It seems related to FP3 only.


Published by: Jesper B. Kiær at 12-03-2015 15:00:06 Full Post

Lenovo Superfish malware opens big hole for hackers ...beware anti-virus program Avast uses the same trick !
UPDATED with Windows Certificate removal of AVAST trusted Root Certificate

As many have reported today, Lenovo has installed a piece of malware called Superfish on its products for some time.

I will not repeat what others have written about it; plenty of coverage is already out there.


In short, it means that the Superfish malware can read all data in an encrypted HTTPS connection, which of course should be confidential.
That is one issue.

The big issue is that it is an entrance for hackers to listen to the secure data as well.

Today I looked at my anti-virus program Avast on my PC, and it certainly looks as if it uses the same dirty trick of installing a root certificate in your browsers.

I use Firefox as my browser, and it is said not to be hit by the Superfish issue. However, when looking in the browser's list of root certificates, I suddenly saw an Avast root certificate.

I did not add this certificate...

This is the same trick as Superfish uses.

Avast adds the certificate so it can listen to all encrypted HTTPS traffic, and it may open up to hackers as in the Superfish case.

It is really bad behavior for a security company not even to ask before adding the certificate, and to enable this feature by default.

An encrypted HTTPS connection should be an "end-to-end" secure connection, with only two parties having access to the data.
Avast may use a unique certificate and its intentions may be good, but it is still a "man in the middle" (attack), and Avast has access to all confidential data going through the connection.

You should disable it!

How to disable the feature:

1. First disable the feature in Avast

- Start the Avast interface from the taskbar in Windows

- Go to Settings

- Select "Active Protection", then "Web Shield" -> "Customize"

- Disable "Enable HTTPS Scanning"

2. Remove Root Certificates

You need to remove the root certificate from each browser.

Firefox

In the menu, select "Options"

Select "Advanced" -> "Certificates" -> "View Certificates"

Go to "Authorities", scroll down to the Avast root certificate and click the "Delete..." button

Accept the removal and click OK.

Internet Explorer

It is a little more complicated, since you will need to start Internet Explorer with administrator rights.

- Right click on the Internet Explorer icon and select "Run as administrator"

A User Account Control dialog box pops up; click the "Yes" button.

In Internet Explorer select Internet Options

Select "Content" -> "Certificates"

Go to the "Trusted Root..." tab, find the Avast root certificate and click the "Remove" button.

Google Chrome

I do not have Google Chrome installed, but the procedure is the same as in the other browsers.

UPDATE: removing the Avast trusted root certificate from the Windows certificate store

Click on the Windows start button, type "Manage Computer Certificates" and click on it.

Open "Trusted Root Certification Authorities" and select the Avast certificate.

Right click and select Remove.

Published by: Jesper B. Kiær at 20-02-2015 00:14:00 Full Post

Come on IBM! ...XPages is still running on the 8 year old version 6 of Java!


That is a problem, since libraries are more and more often compiled with version 7 as the minimum.

IBM, we all know Notes and Domino are the kids you never found love for... but still, please fix it.

It should not be that hard... and a Java runtime version 8 would be nice.
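You can see exactly which Java version a library class was compiled for by reading the class-file header: the major version number is 50 for Java 6, 51 for Java 7 and 52 for Java 8. A minimal sketch (class and method names are mine) of why a "minimum version 7" jar fails on the XPages runtime:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersion {

    // Reads the class-file header: magic number 0xCAFEBABE,
    // then the minor and major version numbers.
    public static int majorVersion(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {
            throw new IOException("Not a class file");
        }
        data.readUnsignedShort();        // minor version
        return data.readUnsignedShort(); // 50 = Java 6, 51 = Java 7, 52 = Java 8
    }
}
```

A Java 6 JVM refuses to load any class whose major version is above 50, throwing UnsupportedClassVersionError.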

Published by: Jesper B. Kiær at 23-11-2014 12:10:00 Full Post

Webmin, fix to "Error code: sec_error_invalid_key" SSL error
If you are using Webmin as a way of administrating your servers, you may be getting the error "Error code: sec_error_invalid_key" when trying to log into your servers.

It is because newer browsers no longer accept certificates with short keys (1024 bits or less).

It is easily solved.

Get access to the Webmin interface by using an older version of a browser.

You can do that by, for example, running a "Live CD" of Ubuntu, Linux Mint or whatever.

Log into Webmin and go to "Webmin Configuration":

Click on "SSL Encryption"

Go to "Self-Signed Certificate" and create a new certificate with a key of 2048 bits.

Click "Create Now" and you are done.

Log into the Webmin administrator again, accepting the new certificate in the browser first.
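As an aside, the 2048-bit minimum is easy to illustrate in Java; this sketch (illustration only, not part of the Webmin fix) generates an RSA key pair of the size modern browsers accept:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class KeyGen {

    // Generates a 2048-bit RSA key pair - the minimum key size
    // that current browsers will accept in a certificate.
    public static KeyPair generate() throws NoSuchAlgorithmException {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        return gen.generateKeyPair();
    }
}
```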

Published by: Jesper B. Kiær at 02-11-2014 12:51:17 Full Post

Quick guide on how to index data from IBM Domino databases in the Apache Solr search engine
Full text indexing has always been a great feature of IBM Notes and Domino. In the old days it was rare to see other systems with full text indexing, and it was a really unique and useful feature.

Unfortunately for IBM Notes and Domino, two things changed that advantage.
- IBM did not really keep improving the full text engine; a new engine arrived in release 5, but since then only minor upgrades have been made.
- New search engines appeared after Doug Cutting created the Java-based full text engine Lucene. By 2001 it was part of Apache as open source, and it has grown ever since.
It has since been the foundation of, or inspiration for, most full text engines.

Apache Solr is an enterprise search engine based on Lucene. It is free and open source, so why not have a look at it?

At IBM Connect 2014, IBM announced that Solr will be used to index mail databases in a later major release... whatever that means...

I will give you a quick tutorial to set it up and have it running on some of your Domino data fast.

What is Apache Solr?

From their website:

"Solr™ is the popular, blazing fast open source enterprise search platform from the Apache Lucene™ project. Its major features include powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more..."

The goal of this quick tutorial:

- set up a Solr server for testing
- have Solr index a Domino database
- query the data

1. The IBM Domino database to be indexed
The database to be indexed will be a simple web-enabled database.

One form with 3 fields:

Subject - a text field
Body - a rich text field
Attachments - a rich text field for attachments

2. Installation of Apache Solr
For this tutorial I will take the easy route.
This means downloading Apache Solr 4.10 and just using a modified version of the included example.
Start by downloading Solr at http://lucene.apache.org/solr/.
Unpack the files anywhere you want.
In this tutorial I will just run it on my desktop pc.

3. Preparing Solr for Domino data
The most important files in Solr are schema.xml and solrconfig.xml.


Solr can handle data as dynamic or static fields. We will define a few static fields.

In the schema.xml file we will add

<field name="title" type="text_general" indexed="true" stored="true" multiValued="false"/>
<field name="subject" type="text_general" indexed="true" stored="true"/>
<field name="body" type="text_general" indexed="true" stored="true"/>
<field name="docurl" type="string" indexed="true" stored="true"/>
<field name="domino_doc_type" type="string" indexed="true" stored="true"/>

The field "id" is very important: it is the key of a Solr document. We will use the UNID of the Domino document as the ID of our Solr documents:

<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />

4. Getting data from IBM Domino into Solr
There are different ways of getting data into Solr.
You can use the Data Import Handler (DIH), which is part of Solr, to import data from an RDBMS, XML, file systems or websites; you can also use Apache Nutch, Apache ManifoldCF and more.

What I will use is the Java API called SolrJ.

For Domino documents the steps are:

- connect to the Solr server
- for each Domino document, create a Solr document and fill in the fields with data from the Domino document
- if the Domino document has attachments, upload them to the server
- for every 1000 documents, commit the Solr documents to the server

I will be using DIIOP to get the data from the Domino database. This is not exactly the fastest way to get data, but for this purpose it is fine.

- Connect to server

public class SolrImporter {
    Collection<SolrInputDocument> solrDocs;
    HttpSolrServer solrServer;
    String solrUrl = "";

    public void init() {
        solrServer = new HttpSolrServer(solrUrl);

        try {
            solrServer.deleteByQuery("*:*"); // delete everything!
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
In this demo, every time I import data all data in Solr is deleted first.
In real life you would of course update Solr documents by their ID.

- Creating the Solr documents:

In Solr data is saved in documents, which are much like Domino documents.

SolrInputDocument solrDoc = new SolrInputDocument();
solrDoc.addField("id", doc.getUniversalID(), 1.0f);
solrDoc.addField("subject", doc.getItemValueString("subject"), 1.0f);
solrDoc.addField("body", ((RichTextItem)doc.getFirstItem("Body")).getFormattedText(false, 0, 0), 1.0f);
solrDoc.addField("docurl", "http://jezzper.com/"+ db.getFilePath()+"/0/"+doc.getUniversalID());

//add to collection of docs to be submitted
boolean result = solrDocs.add(solrDoc);

Notice the last parameter: it lets you "boost" the value (give it greater weight) in the search index, or the opposite. A number larger than 1 boosts the field; a number smaller than 1 lowers its weight.

For performance's sake, don't commit after every Solr document.
Here I commit to the Solr server for every 1000 documents:

if ((i % 1000) == 0) {
    solrServer.add(solrDocs);
    solrServer.commit();
    solrDocs.clear();
}
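Stripped of the SolrJ calls, the batching pattern can be sketched as a small self-contained class (names are mine; flush() stands in for the add-and-commit round trip, and finish() commits the last partial batch, which the modulo check alone would miss):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCommitter {

    static final int BATCH_SIZE = 1000;

    private final List<String> buffer = new ArrayList<String>();
    int flushes = 0; // how many commits have been sent

    // Buffer a document; commit a full batch every BATCH_SIZE documents.
    public void add(String doc) {
        buffer.add(doc);
        if (buffer.size() == BATCH_SIZE) {
            flush();
        }
    }

    // Commit whatever is left over at the end of the import.
    public void finish() {
        if (!buffer.isEmpty()) {
            flush();
        }
    }

    private void flush() {
        // in the real importer: solrServer.add(solrDocs); solrServer.commit();
        buffer.clear();
        flushes++;
    }
}
```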

- For attachments I will use another approach:
For each attachment I will extract the attachment to the file system and then upload the file to the Solr server.

while (e.hasMoreElements()) {
    EmbeddedObject eo = (EmbeddedObject) e.nextElement();
    if (eo.getType() == EmbeddedObject.EMBED_ATTACHMENT) {
        try {
            eo.extractFile("c:\\extracts\\" + eo.getSource());
            AttachmentImporter.Upload(solrServer, "c:\\extracts\\" + eo.getSource(),
                    doc.getUniversalID() + "." + eo.getSource());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

Solr uses Apache Tika, which can detect and extract metadata and text content from various document types.

We will send the attachments to the Solr server, which will detect the content type and automatically extract metadata and content from the files.

The benefit of this is that it is very easy to do. On the other hand, you do not want to send a 2 GB AVI file to the server just to get the metadata extracted.
In that case you might want to consider a solution where you extract the metadata yourself and only save that in a Solr document.

For the ID we will use the Domino UNID + "." + the attachment's internal name:

public static void Upload(HttpSolrServer solrServer, String fileName, String id)
        throws IOException, SolrServerException {
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
    req.setParam("literal.id", id);
    req.setParam("literal.attachment_name", fileName.substring(fileName.lastIndexOf('\\') + 1));
    // unknown fields from Tika get the "attr_" prefix; content goes into attr_content
    req.setParam("uprefix", "attr_");
    req.setParam("fmap.content", "attr_content");
    req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);

    File attFile = new File(fileName);
    // empty content type lets Tika detect the content type itself
    req.addFile(attFile, "");
    solrServer.request(req);

    // workaround: this must be done to be able to delete the tmp attachment file on Windows
    System.out.println("Is attachment deleted from tmp directory? " + attFile.delete());
}

5. The Solr administrator console
You can manage, analyze and query the Solr server from a browser at http://localhost:8983/solr/.

Select the core called "Collection1" and you will get the menu to analyze and query the data

6. Query the data
The basic form of a query in Solr is

field:search text

so body:pdf means: search for the text "pdf" in the body field.
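If the search text comes from user input, characters such as +, :, * and / have special meaning to the query parser and should be escaped. SolrJ ships ClientUtils.escapeQueryChars for this; here is a self-contained sketch of the idea (class and method names are mine):

```java
public class SolrQueryUtil {

    // Characters the Solr query parser treats specially.
    private static final String SPECIALS = "+-&|!(){}[]^\"~*?:\\/";

    // Escapes user input so it is searched literally,
    // then prefixes the field name, e.g. body:pdf
    public static String fieldQuery(String field, String userInput) {
        StringBuilder sb = new StringBuilder(field).append(':');
        for (char c : userInput.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```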

The response can be served in many formats (JSON, XML, CSV etc.) when querying the Solr server.
Another way is to use SolrJ again: query, get the result and handle it, for example in an XPage, or use the JSON output in Dojo.

You can send data from SolrJ to the Solr server as Java beans, and also get the result back as Java beans.

An example of a document returned as JSON in a result :

{
  "id": "4A4DF8CF83AFFC8FC1257D6000416FCB",
  "subject": "Elvis Costello",
  "body": "Elvis Costello was in Denmark and visited the Elvis Presly Graceland museum in \nRanders\n\nA picture of Elvis Costello ",
  "docurl": "http://jezzper.com/jezzper/Solr.nsf/0/4A4DF8CF83AFFC8FC1257D6000416FCB",
  "domino_doc_type": "document",
  "_version_": 1480501411752968200
}

An example of all 7 documents returned (3 Domino documents and 4 attachments):

Search *:* gives:

As XML: Search result.xml
As JSON: Search result.JSON

This was just a quick example of getting up and running and playing with Domino data in Solr.
There is much more to Solr, so I will probably do some more blogging on it over the next months, if I can find the time.

Domino data export to Solr Search Engine Source Code
Libraries needed for SolrJ 4.10
Demo Domino database for Solr indexing

Published by: Jesper B. Kiær at 29-09-2014 00:45:00 Full Post