Exporting Form Data as PDFs

I’ve come across a few solutions to the problem of how to take form data from a variety of sources and turn it into flat files. This is quite often needed in migrations out of case recording systems. Forms may be used for assessments, referrals and other processes, and the number of templates in use tends to be pretty high, so the work involved in transferring answers cleanly into an equivalent in the new system is far too great to be considered seriously (remember the fourth rule in the Practical Data Migration methodology!). 99.999% of it is usually unneeded by reporting but needs to be easily accessible to the practitioner, so what’s needed is a general-purpose process that can take all the questions and answers and apply a standard format to them, so that they are readable and easy on the eye.

Most of the methods I’ve seen in use are slow (using the supplier’s own export feature, for example – I’ve seen one project creak on for months) or expensive (buying third-party software), or both. Luckily, our old friend Talend can be used for this sort of job with a little bit of cunning. The approach is to embed the form questions and answers in HTML tags, which are pretty easy to generate and format, and then pass the result to a Java library that can convert HTML to PDF. I’m sharing it online because I think it’s something that could be helpful to other LAs, and maybe save them a bit of cash. God knows, there’s little enough of it about in these austere times. Talend is free if you only use one developer (there is a paid version for more complex projects too), so it’s a cost-effective option. If you don’t have the skills in-house and need someone to plumb it in for you, drop me a line to discuss terms.

In order to follow this tutorial you’ll need a few prerequisites:

  • Reasonably sound understanding of Talend
  • Decent SQL skills, probably including the ability to write functions in T-SQL, PL/SQL or whatever is your tipple of choice.
  • Ability to construct a reasonably sound HTML document.
  • No fear of Java. You don’t really need to know much about it, just don’t freak out if you see it.

OK? Still here? Shall we have a go?

Get hold of the Java Libraries

Download the zip file of all the jar files from the latest version of openhtmltopdf here

Unzip the archive to a folder

Use tLibraryLoad to load all of the jar files one after the other. You’ll probably find that one of the jars – commons-logging – already exists in Talend, so you won’t need that one, but the others all need loading like this:

[Screenshot: the tLibraryLoad component loading one of the jars]

Then you’ll need to build a workflow that follows this sort of pattern. (I’ll list the components in the order shown in the screenshot, with the names they have there, so you can relate the narrative to the flow, but obviously the names themselves don’t matter much.)

Make a Workflow

[Screenshot: the complete workflow]

list document ids

An input listing all the document IDs. In my case, that’s an MS SQL Server input containing a query with just two columns: the document ID and a text string I can use as a name. In reality you’ll probably want some more metadata too, such as the date of creation and who created it, as most load scripts will want this kind of info for each file they load.
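In case it helps, a hypothetical shape for that query might be something like this (the table and column names here are mine, not from any real system):

select f.id as form_id,
'form_' + convert(varchar(12), f.id) as filename,
f.created_on, -- extra metadata the load scripts will want
f.created_by
from forms f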

take each in turn

A tFlowToIterate component. In other words, it takes all the rows handed to it by the previous component and passes each one in turn to the following stage, so each document gets dealt with one at a time.

select rows as html

This is going to be by far the hardest component to build, since it is effectively building an HTML document out of the various parts of the form, and the result must be complete and valid XHTML. (Very important, that – it has to be XHTML because the Java library will want valid XML as well as decently formatted HTML.)

Let’s say you have a super-basic structure where each form has several sections, each section has several questions, and each question may or may not have an answer, which can be either a string or a date. You’d have something like this:

"select
f.id,
s.id page_order,
q.id question_order,
'<span style=\"color: #00006B; font-weight: bold; padding: 5px;\">' +
dbo.dm_xml_cleanse(the_question) + '</span>' as opener,
coalesce(a.the_text, convert(varchar(32), a.thedate, 103)) as stringans,
'<br/>' closer
from forms f
inner join sections s on s.form_id = f.id
inner join questions q on q.section_id = s.id
left join answers a on a.question_id = q.id
where f.id = " + ((String)globalMap.get("form_id"))

In this code, form_id is just the name of the column in which the unique identifier of the form is held. The globalMap.get() call retrieves the value stored by the iterate step, in order to limit the output to only the sections/questions/answers relevant to the specific instance of the form being processed right now.

Note that each row has a section order and a question order (which I am assuming to be _in_ order – ie, sorting on these two will get the questions into the order they appear in the front end). The text output is in three parts too: a beginning, a middle and an end. It doesn’t have to be like that; I just find it easier to have the opening tag in one, the text in the second and the closing tag in the third.

Obviously the schema for this query will need to give all those text strings plenty of elbow-room, otherwise the text will be truncated and nobody wants that.

dbo.dm_xml_cleanse is a T-SQL function whose job is just to replace dodgy XML characters with safe equivalents, and to replace line breaks with <br/> tags. Obviously, if the text already has markup within it you’ll need to deal with that first.
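A minimal sketch of what such a function might look like (the entity list is deliberately short – extend it to suit your data):

create function dbo.dm_xml_cleanse (@txt varchar(max))
returns varchar(max)
as
begin
-- ampersands first, or we'd double-escape the entities added below
set @txt = replace(@txt, '&', '&amp;');
set @txt = replace(@txt, '<', '&lt;');
set @txt = replace(@txt, '>', '&gt;');
-- turn line breaks into XHTML line breaks
set @txt = replace(@txt, char(13) + char(10), '<br/>');
set @txt = replace(@txt, char(10), '<br/>');
return @txt;
end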

Of course, these tags on their own aren’t enough: you’ll also need to union in other sections, including a header (section and question ID both set to 0 so it appears at the top) and a footer (ditto, but set to 100000000).
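The header row of that union might be shaped something like this (a sketch only – the footer is the same idea with 100000000 as the sort keys and '</body></html>' as its text):

select f.id, 0 as page_order, 0 as question_order,
'' as opener,
'...the DOCTYPE/head/body block shown below...' as stringans,
'' as closer
from forms f
where f.id = 42 -- same form_id filter as the main query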

The header should have something like this in its text output

'<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\"
\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">
<html xmlns=\"http://www.w3.org/1999/xhtml\">
<head>
<title>Migrated Form</title>
<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\" />
<style>
<!-- insert stylesheet data here -->
</style>
</head>
<body>'

And you can add headings, divisions and other bits by unioning further queries in there: a query on just form and section, with a 0 in the question_order field, would give you a row at the top of each section which you could use for a heading. You get the idea. Just stick to standard meat-and-potatoes HTML tags with some CSS to make it all more readable.
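For example, a per-section heading row might be unioned in like this (section_name is an assumed column):

select f.id, s.id as page_order, 0 as question_order,
'<h2>' as opener,
dbo.dm_xml_cleanse(s.section_name) as stringans,
'</h2>' as closer
from forms f
inner join sections s on s.form_id = f.id
where f.id = 42 -- same form_id filter again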

You can use images like this

<img src=\"file:///E:/mylocalfolder/pdf_logo.png\" style=\"float: right;\" />

For some reason, I couldn’t get this to work if the image was on a network share, though – only on a local drive.

You could also look at odd CSS tricks that I guess were developed for printable pages and are not commonly used on standard web pages. For me, the most useful was

page-break-before: always;

…which I used in the style attribute of the division surrounding each new section, in order to break the document up better and stop a lot of ugly splits mid-question.

join three parts together

All this is doing is reuniting the three text fields in each row into one single field

sort in order

Sort on the section_order and question_order fields to get all the rows in the correct order with the header at the top, the footer at the bottom and so on

aggregate all rows

Uses a “list” aggregate function to amalgamate all the text fields in all the rows into one huge text string

Set Global Variable

Assigns that text string to a single (and very large!) variable that can then be re-used in various places. Obviously, make sure the list separator in the aggregate is a space and nothing else, otherwise the separator characters will bollox up the XML.

makefiles

A tJava component where the file is actually created. There’s a small amount of Java here that does the work.

Create the File

First of all, on the imports tab, you need to write

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import com.openhtmltopdf.pdfboxout.PdfRendererBuilder;

Then, on the code tab, comes this code, which will actually write out a PDF version of your form.

OutputStream os = new FileOutputStream("E:\\myfilepath\\" + ((String)globalMap.get("filename")) + ".pdf");
PdfRendererBuilder builder = new PdfRendererBuilder();
builder.withHtmlContent(((String)globalMap.get("theHTML")), "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd");
builder.toStream(os);
builder.run();
os.close(); // close the stream once the PDF has been written

“filename” is the column in the “list document ids” component that contains the name of the document to be created. Obviously this should be unique so either the file path or the name should probably contain the unique identifier for the form so that there’s no risk that one form overwrites another.

“theHTML” is the name of the huge text variable created in the “Set Global Variable” component. If the HTML isn’t valid XML, the renderer will explode, so while you’re in test mode there’s an extra bit of code you can use to write the HTML into a separate file before converting it to PDF. It’s best to paste this ABOVE the previous block:

// THIS FIRST SECTION JUST WRITES AN HTML FILE SO YOU CAN VALIDATE THE HTML IF YOU GET A PARSE ERROR
File file = new File("E:\\pdf_exports\\asm\\" + ((String)globalMap.get("filename")) + ".htm");
// FileOutputStream creates the file if it doesn't already exist
FileOutputStream fwr = new FileOutputStream(file);
byte[] toBytes = ((String)globalMap.get("theHTML")).getBytes();
fwr.write(toBytes);
fwr.flush();
fwr.close();

If you get an error from the tJava Component, you should be able to find the file that was created just before the error. You can upload it to the W3C’s XML validator which will tell you what is wrong with the XHTML you constructed and you can keep tweaking till it goes through.

Taking it Further

As you can see, this is a basic outline. To turn it into a complete working process you’ll probably want a bit more, such as an output that takes the metadata of each file created and stores it in a table (along with the file path), subject to confirming that the file was created successfully, so you can track what you’ve captured so far. Subsequent loads can then check that table, disregard anything that has already been captured and not subsequently updated, and start creating new files based on what’s left.

If you have a gazillion forms they most likely won’t all fit into a single folder, so you’ll also need a function to construct a path to the folder each belongs in. This is mildly annoying because I found I couldn’t get Java to create the folders in a nice, easy way, so I ended up adding, just before the tJava component, a tFileTouch component to make a temporary file (this WILL create the folder tree automatically) and then a tFileDelete to get rid of the temp file, leaving just the folder behind so that the Java can drop the file in the right place.
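For what it’s worth, the tracking table might be shaped something like this (a sketch – names and columns are illustrative, and f.updated_on is whatever last-modified column your source system offers):

create table migrated_files (
form_id int primary key,
file_path varchar(512) not null,
created_on datetime not null,
source_updated datetime null -- last change seen in the source system
);

-- a later run picks up only forms not yet captured, or changed since
select f.id
from forms f
left join migrated_files m on m.form_id = f.id
where m.form_id is null
or f.updated_on > m.source_updated;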

Building a List in Talend to Use as a Filter in Another Component

This is an answer to a question about Talend that was posted on Stack Overflow. I wasn’t able to post as many pictures as I needed due to the house rules in place, so I have moved the whole thing here and linked back to it. The general gist of the question was that there was a mahoosive table that would have eaten too much memory if its entire contents were dragged into Talend, so the user wanted to take some values from a different table, string them out into a list, pop them into a context variable and then squirt them back into another SQL query, so that he ended up with a smaller set of results to work with. With me so far? OK, read on!

Hi

This should be possible. I’m not working in MySQL but I have something roughly equivalent here that I think you should be able to adapt to your needs.

As you can see, I’ve got some data coming out of the table and getting filtered by tFilterRow_1 to only show the rows I’m interested in.

[Screenshot: the job, with rows filtered by tFilterRow_1]

The next step is to limit it to just the field I want to use in the variable. I’ve used tMap_3 rather than a tFilterColumns because the field I’m using is a string and I wanted to be able to concatenate single quotes around it, but if you’re using an integer you might not need to do that. And if you have a lot of duplicate values you might also want to get a tUniqRow in there to cut out the unnecessary repetition.

[Screenshot: tMap_3 configuration]

The next step is the one that does the magic. I’ve got a list like this:
'A1'
'A2'
'B1'
'B2'
etc, and I want to turn it into 'A1','A2','B1','B2' so I can slot it into my where clause. For this, I’ve used tAggregateRow_1, selecting “list” as the aggregate function to use.

[Screenshot: tAggregateRow_1 configuration]

Next up, we want to take this list and put it into a context variable (I’ve already created the context variable in the metadata – you know how to do that, right? If not, here’s a quick rundown). Use another tMap component, feeding into a tContextLoad component. tContextLoad always has two columns in its schema, so map the output of the tAggregateRow to the “value” column and enter the name of the variable in the “key”. In this example, my context variable is called MyList.

[Screenshot: tMap feeding tContextLoad]

Now your list is loaded as a text string and stored in the context variable ready for retrieval. So open up a new input and embed the variable in the sql code like this

"SELECT distinct MY_COLUMN
from MY_SECOND_TABLE where the_selected_row in (" +
context.MyList + ")"
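If it helps to visualise it, once the variable has been substituted in, the query the database actually receives is just:

SELECT distinct MY_COLUMN
from MY_SECOND_TABLE where the_selected_row in ('A1','A2','B1','B2')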

It should be as easy as that, and when I whipped it up it worked first time, but let me know if you have any trouble and I’ll see what I can do.

Merging TIFFs into PDFs Using Only Free Software

I had a tricky problem a while ago and nobody seemed to know how to do it so when I worked it out, I thought it might be fun to post a how-to here for other people to crib from and take the credit. Wait, is this such a great idea? Oh well, never mind, here goes…

The challenge is to take a group of scanned pages from a document management system and prepare them for migration into Servelec Corelogic’s Frameworki/Mosaic product. The documents are scanned on a page-by-page basis as TIFFs, and the objective is to merge the pages into a single file, either as TIFFs or as PDFs in a new folder, with the paths held in a database table. In this example, I’ve used nConvert, which is largely free, although if you use it commercially you should buy a license. There’s another free program that I believe can do the same job, although I haven’t specifically tried it – namely Irfanview.

The general strategy is:

  • List the files and where they’re stored in the file system or EDRMS
  • Use T-SQL or PL/SQL to write a command-line function to group all the individual files (pages) together and merge them into a single file in the file system
  • Pass the location of the new file to the import process.

Starting in Talend Open Studio, the first step is to create a new job using the tFileList component as the starting point, to get a list of files in the folder you’re interested in.

[Screenshot: tFileList component]

Use an iterator to connect to the next step – a tFileProperties component, which you can use to get the file properties of each file in turn. Check the image below for the format to use. You can use this to store the details of all the files in a table called – in this example – FILE_FILESYSTEM.

[Screenshot: tFileProperties configuration]

To move to the next stage, I’ve used a T-SQL function to create a shell command that does two things: first, create a new folder for the files to live in, and second, invoke a third-party app called nConvert to merge the pages into a single file. In the command below, you can see the “md” command being used to create the folder. nConvert can then be called to either merge the files or to merge and convert them to PDFs.

cmd /c cd C:/test/smart_files/ &
md ID &
cd ID &
md 64398 &
nconvert -multi -out tiff -c 5 -o C:/test/smart_files/ID/64398/164994_v1.tif U:/00707000/00706853.tif U:/00707000/00706854.tif U:/00707000/00706855.tif U:/00707000/00706856.tif U:/00707000/00706857.tif U:/00707000/00706858.tif U:/00707000/00706859.tif U:/00707000/00706860.tif U:/00707000/00706861.tif U:/00707000/00706862.tif U:/00707000/00706863.tif U:/00707000/00706864.tif U:/00707000/00706865.tif U:/00707000/00706866.tif U:/00707000/00706867.tif U:/00707000/00706868.tif U:/00707000/00706869.tif U:/00707000/00706870.tif U:/00707000/00706871.tif U:/00707000/00706872.tif U:/00707000/00706873.tif U:/00707000/00706874.tif >>C:/test/output.txt

In the example above, I’m just merging them, but it’s simple to merge them as a PDF by just changing the format to

-out pdf

…and…

C:/test/smart_files/ID/64398/164994_v1.pdf
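Going back to the function itself, here’s a rough sketch of how it might glue the page paths together (table and column names are hypothetical; on SQL Server 2017+, STRING_AGG would do the same job as the FOR XML PATH trick):

select d.doc_id,
'cmd /c cd C:/test/smart_files/ & md ID & cd ID & md ' + convert(varchar(12), d.doc_id)
+ ' & nconvert -multi -out tiff -c 5 -o ' + d.target_path
+ (select ' ' + p.source_path -- append each page path in order
from pages p
where p.doc_id = d.doc_id
order by p.page_no
for xml path('')) as shell_cmd
from documents d;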

The content of the table can then be split in two: the bulk of it is passed to the import process, while the last column – containing the output of the T-SQL function – is passed to a shell command using a tMap component:

[Screenshot: the tMap component]

into an iterator….

[Screenshot: the iterate link into the shell command]

The iterator then passes the output of the function into a shell command and merges the files into a single file in the specified folder.

You now have a list of merged files in a format the import process can understand and a folder containing the merged files, all stored in the place in which the import process expects to find them. It should be straightforward to simply run the load procedure and scoop the merged files up into Mosaic.

Christmas Gift Ideas

It’s that time of year again, and project managers up and down the country are wondering what to put in their team members’ stockings. Well, have no fear, here’s my must-have gift-giving guide for the Data Guru who has everything.

1. A Better Computer

Is your data migration lead’s brow furrowed? Does he spend hours staring at his screen clenching and unclenching his fists as the record count ticks from 100 to 200 on a 10,000,000 record load? This might be a sign that the refurbished Pentium III laptop or the virtual box accessed through a dumb terminal that you thought would be so much more cost-effective than a new Dell wasn’t such a good choice after all.

As data flies in and out of it, headed for the target database, both of the test machine’s kilobytes fill up immediately and it starts furiously swapping to keep up. The lights dim, the smell of burning fills the air, the development computer fails to respond to mouse-clicks, and the screen fades to grey. This is when that lovely, Christmassy scarlet colour can be seen in the cheeks of your colleague.

Why not log in to the purchasing portal and order a better computer? What it costs you will be more than made up in fees as work gets done more quickly and doesn’t spill over into extra days and evening work.

2. Talend Data Integration Suite

OK, so Open Studio is the best £0.00 you’ve ever spent, but there’s a whole other level of greatness!

3. The Force Awakens Tie-in Poster

The power of the Force (AKA The Disney Corp) has reached into the world of data migration, producing a system even more powerful than PDMv2, and now you can buy inspirational posters based on the movie script to help motivate your data migration lead to fight the power of the dark side.

4. Another Spreadsheet

This one is a perennial favourite, and ultimately what most data migration professionals are given every year. We’ve all seen this heart-warming yuletide scene: Late December, a few scant weeks before go-live, and the project team are pulling on their coats, ready to go down the pub for their Christmas do. As if suddenly remembering something, one of the BAs turns and says

“Oh by the way, I’ve just emailed you a spreadsheet the business have told me about. It has mission-critical data on it and they absolutely can’t go live without it. Merry Christmas!”

…and with that they are all gone, leaving the vision of a slowly turning egg-timer reflected in the tears of – one assumes – pure joy, streaming down the data migrator’s face.

Happy Christmas…. And remember, we’re making a list, we’re checking it twice….

Indiana Jones and the Last Spreadsheet

We recently introduced our ten-year-old daughter to the Indiana Jones movies. Even the fourth one, but let’s not talk about that. At the very end of the original Raiders of the Lost Ark, there’s a scene in which the Ark of the Covenant is boxed up and placed in a warehouse surrounded by what looks like tens of thousands of identical boxes. The modern equivalent of its desert resting place is not an underground tomb, guarded by snakes and poison darts, but a total immersion in endless, bureaucratic sameness. Now, I don’t know if you’ve ever tried to implement a “Spreadsheet Amnesty”, but if you have, you’ll know it is exactly like that. Ex. Act. Ly.

For the uninitiated, a spreadsheet amnesty is an essential part of any data migration project. Essentially, the problem you have is that the old system you’re setting out to replace is total rubbish. After all, if it wasn’t, you wouldn’t be there. The staff are clever people with a job to do, and they can’t be hampered by this bad system, so they’ve created all manner of spreadsheets, Access databases, and who-knows-what to record all their day-to-day working data in. If you ignore it, you’ll be starting out with an incomplete system, missing key data. So, you have to ask everyone to identify their data sources so you can scoop them all in and use them to plug gaps in the main database. Sometimes, you might come across political impediments; in all likelihood, internal business teams have been waging a war of attrition against these spreadsheets for years, and I’ve heard of cases where, once they get onto the project board, there’s been a desire to exert some influence to “punish” the offenders by ruling these contraband data sources invalid and out of scope. If you go down that route, you are condemning the project to repeat the mistakes of the past. Hence the name “amnesty”. No blame. Everyone is welcome, and so is their data.

But politics aside, when you go out and trawl through the spreadsheets, it can be like searching for a needle in a large stack of identical needles. Or, if you prefer, a sacred relic in a warehouse full of fake sacred relics. The trouble is, there are often hundreds of spreadsheets, many of which seem to capture slightly different views of the same data, and a lot of the time it’s hard to pick out the ones that hold unique data driving a specific business process, rather than just a mish-mash of items drawn from other places. I thought it would be helpful to list a few questions that are worth asking when deciding which box to pry open in your search for the ark.

So you visit the team room and ask them to show you the spreadsheets they’re using day-to-day in addition to recording in the case recording system, which, for the sake of argument, I’ll refer to as “Disappointech”. As each one is produced you ask:

Is this a report from Disappointech?

If the answer is yes, it’s not a data migration source. Whatever is in it must be in the core system already. As an aside, though, if this report is used as part of business-as-usual, you need to make sure the business analysts who are configuring the software are aware of it, and that the new system will produce an equivalent.

Is there anything here that’s not in Disappointech?

If the answer is no, the team might just have been compiling or copy/pasting the information into a spreadsheet for ease of use. Again, there’s nothing here that you need to migrate, but it’s worth thinking about how the new system can save them from such a tedious, time-consuming chore. Maybe a report could help?

Is it information you need to have in the new system?

OK, so we’ve established that this is unique information that can’t be found in Disappointech and only exists in the spreadsheet. But does it belong in the new system? If it’s about a social worker’s annual leave, say, or contributions to the coffee fund, or the staff Eurovision sweepstake, then the answer is probably no and you can move on. More likely, it’s something borderline: information that’s pretty close to the type of data you need to bring in but not quite in scope, and you might have to get a project decision on which side of the fence it lies, so you don’t get into “scope creep”.

Is the data about a specific person? 

If the data is aggregated – for example, if it shows such-and-such a percentage of visits done on time in each month, but you can’t identify specific dates for a specific service-user – then it is not likely to be usable. Most social care systems have to associate each item with a specific client record, and it isn’t possible to go from a statistic back to the source data that went into producing it.

What’s the quality like?

Having established that there’s something here that the project needs to have, you’ll need to assess whether it’s tidy enough to import. Is each service user’s Disappointech ID present and correct? Is each item on its own row? Do the date columns all contain real dates, or do some of them have things like “To Be Confirmed” or the dreaded “N/K”? Do the team store all the data in this one spreadsheet, or do they have multiple copies, say one for each financial year?

If the quality is low, you’ve got a few choices. You could improvise and write some fairly complicated code that tries to work around the problems. That’s a high-risk strategy, because it’s quite likely to need a lot of maintenance as new rows get added to the sheet, introducing new problems. A second option is to ask the team to transfer the data to a new template which you can set up with lots of validation, so that it has to be filled in in a certain way. Lastly, it’s always worth considering whether all this pain is necessary and whether – if the list is fairly short – they could manually transfer the data by keying it in on day 1, perhaps with a bit of help from someone who knows the new software well. It’s surprising how often this is the best solution for everyone.

OK, I think that covers Excel spreadsheets. I was going to write about how to cope with poorly designed Access databases as well, but instead, here is a visual metaphor to describe what that feels like:

[Image: Indiana Jones running from the boulder]