The IBM Notes Client is NOT GDPR compliant
An essential part of GDPR is personal data: protecting it and managing the rights to it.
To know if a data breach has taken place, you need to have an audit trail and you need to know where the data is.
If you are using purely web-based Domino solutions (no DIIOP etc.), you can control the data: which databases replicate between which servers, and so on.
You can have a log of all HTTP actions. It may not be a handy log, but a log nevertheless.
The moment you include the IBM Notes Client, things change...
The user can make a copy or a replica of a 50 GB database, and you cannot do much about it.
The cat is out of the box, and you can no longer control what happens to the data.
We all know the annoying (and wonderful) feature that a user can mark thousands of documents and just copy them.
Then the user can paste them in many different places: a local database, the original database (very annoying), and so on.
You no longer have the essential control of the data.
But wait, you say! There is a setting in the ACL: "Replicate or copy documents"!
That sounds sweet... and I am sure IBM meant well, but this setting also prevents users from copying text from a document, which would stop any user from doing their daily job.
So it is an "all or nothing" solution, and I can guarantee you that this option is ticked for ALL users on any database, just to get any work done.
There is no real logging going on. You can make all sorts of hacks to log things, but it is too easy to go "under the radar" and do things without them being logged.
(Yes, there are 3rd-party companies who will try to fix this, but that is not good enough; a log/audit trail should be available for any IBM Notes/Domino database from IBM.)
No matter how you twist it... applications using IBM Notes today are NOT GDPR compliant.
The simple fixes - my suggestions
It is all fine and dandy that V10 is coming out later this year with new features, but this needs fixing NOW, since the GDPR deadline is 25 May 2018.
This is my suggestion to fix this:
- In the ACL on the database, add these options instead.
- I would probably also consider splitting the replicate and copy permissions into two separate entries.
- Maybe an entry for copying text etc. is also needed; I don't know if it would ever be used.
- Add logging, with a choice of a text file or a Notes database, using the same name as the database and just a separate file extension.
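To make the log-file suggestion concrete, here is a minimal sketch of what one audit line could look like. Everything in it (the field layout, the `.audit` extension, the names) is my own invention for illustration, not anything IBM ships:

```shell
# Hypothetical audit log: one line per document action, written to a file
# with the same name as the database but a separate extension.
DB="crm.nsf"
LOG="${DB%.nsf}.audit"                      # crm.nsf -> crm.audit
printf '%s|%s|%s|%s\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  "CN=Jane Doe/O=Acme" \
  "COPY_DOCUMENTS" \
  "count=2000" >> "$LOG"
cat "$LOG"
```

A flat, append-only format like this would be trivial for the server to write and easy to feed into any audit tooling.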
This should all be very easy to do and could be in a fix soon... if IBM/HCL are willing.
IBM/HCL ....please make the IBM Notes Client GDPR compliant !
"The trust relationship between this workstation and the primary domain failed" using Synology as Active Directory server
I am using my Synology server as my Active Directory server.
Synology uses Samba for its Active Directory server, and it has been working great so far... well, until today.
Suddenly I could not login and got the error "The trust relationship between this workstation and the primary domain failed".
I logged into the Synology server with another PC and figured the issue most likely was related to Domain Policies.
And yes... if you have a policy with "Maximum password age" AND an account with "Password never expires", you have a conflict when the password expires.
So make sure "Maximum password age" is not ticked if you want an account that never expires.
Ideally Synology should have warned about this conflict when saving, but...
Tutorial: Moving a CentOS physical server with soft RAID to a KVM virtual machine guest using virt-p2v and virt-v2v
I have used KVM on CentOS for several years as a platform for running virtualized servers, and my experiences have been very good.
Now I have a case where I want to move a physical CentOS server running RAID 10 on soft RAID to a CentOS KVM server.
There are several ways to create a VM guest from a physical server; however, it is a complex process and plenty of things can go wrong along the way.
The safest way to do it is to use virt-v2v and virt-p2v.
These are Red Hat created tools but can be downloaded even if you are not a Red Hat user.
The (source) physical server will run virt-p2v (physical to virtual), and the conversion server (in this case also the target server) will do the conversion to a KVM guest.
Source - the physical server:
- To handle the transition to a KVM virtual machine guest, you need to shut down the running server.
- The physical server will boot into virt-p2v.
Read about virt-p2v here: http://libguestfs.org/virt-p2v.1.html
- Download virt-p2v ISO image here: http://oirase.annexia.org/virt-p2v/
- Create a bootable device from the ISO image. I use Rufus to do this (https://rufus.akeo.ie/)
- Boot the server into virt-p2v from the bootable device just created.
- You will be sending a LOT of data over the network, so a fast network is good; also make sure the servers can "see" each other (firewall, network segments, ...).
1) enter the IP of the conversion server
2) enter user name on the conversion server (root)
3) password of user on conversion server.
4) test that you can connect to conversion server.
When everything is OK, press "Next" and you will see the next and last setup screen.
I will show what to fill in.
1) First give the VM a name
2) This is the number of virtual CPUs
3) Amount of RAM
(You can always change the numbers after the VM has been created)
1) There are several options here
Since I want to create a VM guest on the conversion server I select "libvirt"
2) I leave this empty since I want the process to create a new VM with the name "New Centos Server"
3) Leave blank
4) The format for the VM. The RAW and QCOW2 formats are supported. RAW is fastest, but QCOW2 has many more features, like snapshots, sparse allocation, ...
I choose QCOW2
5) Choose between "Sparse" and "Preallocated". Using Sparse the disk will expand as needed, using less space to start with. However preallocating all the space for disks will speed up write times dramatically.
So if you are doing something write-heavy, use Preallocated.
I choose "sparse" here in this demo.
virt-p2v is a bit smart and will investigate the source disks and only send real data, not deleted files, empty space etc.
Choose which disks to move to the VM. I use soft RAID here, so it is important to get all the disks moved to the new server.
Don't move the boot device over; unselect it here.
Select the network cards to move over
Since KVM may not have drivers for the physical devices, the conversion process will go in and investigate the installation and substitute drivers available in KVM, which in most cases means Virtio drivers, since they are the fastest. Virtio drivers are paravirtualized drivers, which give near "bare metal" performance.
Linux will already have the virtio drivers installed, but if the guest is MS Windows you need to download the virtio drivers and install them. You will also need to install libguestfs-winsupport.
Target - the KVM guest
To handle the conversion on the target Centos server you need to install virt-v2v.
Prerequisite: KVM/QEMU is already installed on the target server
- yum install virt-v2v
To administer the VM guest install virt-manager
- yum install virt-manager
Running the conversion
Go back to the physical server and click on the "Start conversion" button
virt-p2v will report as it goes along. It starts with the conversion and then moves the disks over.
Any status other than 0 means there was an issue.
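If you script around the conversion, that status convention is just an ordinary shell exit code. A tiny sketch (`run_conversion` below is a stand-in for the real virt-v2v invocation, not an actual command):

```shell
# Stand-in for the real conversion run; replace the body with the
# actual virt-v2v invocation in a real script.
run_conversion() { return 0; }

# Exit status 0 means success; anything else means there was an issue.
if run_conversion; then
  echo "conversion OK"
else
  echo "conversion failed with status $?"
fi
```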
Open the Virt Manager and voila ! :-)
The server is now running on the target KVM server
Devices on the KVM guest
This example used a CentOS physical server, but other OSes are supported too:
Red Hat Enterprise Linux 3.9
Red Hat Enterprise Linux 4
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux 7.1 and later
Windows Server 2003
Windows Server 2008
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
virt-v2v can of course also move VM guests from other VM platforms:
Red Hat Enterprise Linux 5 Xen
VMware vSphere ESX / ESX(i) - versions 3.5, 4.0, 4.1, 5.0, 5.1, 5.5, 6.0
IBM Domino and DIIOP - not quite doing what it is supposed to.
I am using DIIOP to have remote sessions with a Domino server, because using a Java agent just gave too many issues with some external libraries.
While DIIOP may not be the fastest way to work with data on an IBM Domino server, it usually does the job... well, sort of...
I've just found out that when using RichTextItem.EmbedObject, the name parameter for the file does not work, so the attachment gets this reeeeaaally long name... including the file path.
Also, I needed to create a text file on the server, so why not use the Stream class? Unfortunately I got a lot of errors, until I found out the Stream was actually not trying to write the file on the Domino server, but on my local PC??
If I wanted to write to my local file system I would probably not use a REMOTE DIIOP session! ..capisce IBM?
Maven - how to get dependency JAR files in build too
Maven may be smart to some... but it is also a bloody XML nightmare where many things can go wrong... (or maybe it is just me being stupid).
Just a reminder to myself... when I want the dependency JAR files in the build too, add this to the pom.xml file:
<!-- <classpathPrefix>lib</classpathPrefix> -->
<!-- <mainClass>test.org.Cliente</mainClass> -->
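The two commented lines above are only a fragment, so for reference here is a sketch of the usual pattern they belong to: the maven-dependency-plugin's copy-dependencies goal puts the dependency JARs in target/lib, and maven-jar-plugin references them from the manifest. The main class and paths are examples, not from the original snippet:

```xml
<build>
  <plugins>
    <!-- Copy all dependency JARs to target/lib at package time -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>copy-dependencies</goal></goals>
          <configuration>
            <outputDirectory>${project.build.directory}/lib</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <!-- Reference the copied JARs from the main JAR's manifest -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <addClasspath>true</addClasspath>
            <classpathPrefix>lib/</classpathPrefix>
            <mainClass>test.org.Cliente</mainClass>
          </manifest>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this, `java -jar target/myapp.jar` finds its dependencies in the lib/ folder next to the JAR.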
IBM Domino, a very annoying performance issue now SOLVED
For the past years a customer of mine has had IBM Domino performance issues with a Domino Server.
The issue was really felt when working with attachments.
The company has small offices around the world, so we use SmartUpgrade (which in general works well) to manage Notes Feature Pack upgrades.
We attach the Feature Pack as an attachment in the SmartUpgrade database and use a Policy to push it to users.
Normally this is very fast, but for one server it would maybe take 5 hours to download the file.
The download would start at a decent speed and then only get slower and slower and in the end literally only move a few bytes at the time.
We tried "everything" ...even moving to newer faster hardware did not make any change.
The server is a Windows 2012 R2 server with Domino 9.0.1 FP8 in a Domino cluster. The other server is a Linux server, which did not have the issue.
The relevant difference between the two servers' Domino Notes.ini configurations was the TCP setting.
The customer uses encryption and compression on the Notes TCP connection.
The Windows server had: TCPIP=TCP,0,15,0,,45088
The Linux server had: TCPIP=TCP, 0, 15, 0,,32800
Looking at the documentation for the TCPIP setting, it says:
|The TCPIP port line can contain up to six arguments as described below, with the first position numbered as position 0.|
argv[0] Driver name
argv[1] Adapter number (unused)
argv[2] Requested number of sessions (unused)
argv[3] Data buffer size to use. If the value is 0, the default size is used. Default sizes are different for different port drivers.
argv[4] Number of network buffers to preallocate (unused)
argv[5] Port flags, as follows:
        0x8000 Encryption is enabled
        0x0020 Compression is requested
Since we use compression and encryption on the connection, the last parameter should be 0x8020 in hexadecimal, which is 32800 in decimal.
That is the value we had on the fast Linux server. After changing the value on the Windows server to
TCPIP=TCP, 0, 15, 0,,32800
the network was much faster and SmartUpgrade became just as fast as on the Linux server :-)
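The flag arithmetic above is easy to double-check; it is just the OR of the two documented flags:

```shell
# 0x8000 (encryption) OR 0x0020 (compression) = 0x8020 = 32800 decimal
printf '0x%04X = %d\n' $(( 0x8000 | 0x0020 )) $(( 0x8000 | 0x0020 ))
# prints: 0x8020 = 32800
```

Incidentally, 45088 is 0xB020: the same two documented flags plus the undocumented 0x3000 bits.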
The big question is: why does the Domino installer suggest TCPIP=TCP,0,15,0,,45088?
And what are the undocumented port flags used in this scenario?