Thursday, December 13, 2012

How to create and deploy a TBO

BOF (Business Object Framework) provides an easy way of implementing custom business logic. The advantage of BOF over application-level customization is that the custom business logic is always executed, regardless of the client program accessing the repository.
The first version of BOF (1.0) kept all of the TBO information outside the repository: the BOF registry (the list of BOF modules) was kept in a dbor.properties file, and the implementation jars had to be deployed on the classpath of each client (libraries folder). The main drawback of this approach is that every client has to be configured in order to use the BOFs properly.
The new version (2.0) eliminated this drawback with a different approach: the BOF registry, the modules and their implementations are stored in the repository, so there's no need to configure the clients - all of them benefit from the BOF functionality deployed in the repository.
There are 3 types of BOF modules: TBOs (Type-based Business Objects), SBOs (Service-based Business Objects) and Aspects. TBOs are the most commonly used and serve for modifying and extending the behavior of persistent (custom) repository object types.

In this article I'll focus on creating and deploying a TBO. The steps to follow are:
1. Create a custom type
2. Write TBO source code
3. Create Jar Definition artifacts
4. Create Java Library artifacts (optional)
5. Create Module artifact
6. Deploy the TBO

1. Create a custom type
The very first thing is to have a custom type which your TBO will map to. You can't map a TBO to a standard Documentum type. Your custom type must extend a persistent type (e.g. dm_document, dm_folder, dm_user, etc.).
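For illustration, a minimal type matching the sample used below could be created with DQL like this (the type name prj_attach, the attribute length and the supertype are assumptions for this example, not requirements):

CREATE TYPE "prj_attach" ("prj_doctype" string(64)) WITH SUPERTYPE "dm_document" PUBLISH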

2. Write TBO source code
The first step is to write the Java code of the TBO - the operations that will be executed when an object of the mapped type is changed. Using Eclipse, create a new Java Project and add the DFC libraries (and other required libraries) to the build path.
The code and class structure depends on complexity, but for a TBO you need at least an interface and an implementation class, preferably in different packages. I will list here some trivial samples:

a) Interface IAttach:
package com.company.project.attach.tbo;

import com.documentum.fc.client.IDfBusinessObject;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfDynamicInheritance;

public interface IAttach extends IDfBusinessObject, IDfDocument, IDfDynamicInheritance {
    public void setDoctype(String doctype) throws DfException;
    public String getDoctype() throws DfException;
}

The interface should extend 3 other interfaces:
- IDfBusinessObject - required for module management
- the IDfPersistentObject subinterface (usually IDfDocument) defined for the base type you're extending: if your type is a subtype of dm_document, use IDfDocument, for dm_folder - IDfFolder, etc.
- IDfDynamicInheritance - enables some advanced module handling

As you can see, the interface declares only 2 custom methods, setDoctype and getDoctype, which set and get the value of the custom attribute "prj_doctype".
Compile the interface and pack it into a jar (attach.jar).

b) Implementation class Attach:
package com.company.project.attach.tbo.impl;

import com.company.project.attach.tbo.IAttach;
import com.documentum.fc.client.DfDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfLogger;

public class Attach extends DfDocument implements IAttach {

    public void doSave(boolean saveLock, String versionLabel, Object[] extendedArgs) throws DfException {
        DfLogger.info(this, "doSave called for object with id: {0}", new String[] { getObjectId().toString() }, null);
        setTitle(this.getDoctype());
        super.doSave(saveLock, versionLabel, extendedArgs);
    }

    public void setDoctype(String doctype) throws DfException {
        setString("prj_doctype", doctype);
    }

    public String getDoctype() throws DfException {
        return getString("prj_doctype");
    }

    @Override
    public String getVendorString() {
        return "Copyright Documentum Guy";
    }

    @Override
    public String getVersion() {
        return "1.0";
    }

    @Override
    public boolean isCompatible(String version) {
        return getVersion().equals(version);
    }

    @Override
    public boolean supportsFeature(String arg0) {
        return false;
    }
}

The implementation class contains the implementation of the 2 custom methods, of doSave - a method overridden from DfPersistentObject (it is called when an object of this type is saved) - and of the other 4 methods declared in the IDfBusinessObject interface.
According to best practices you should override only the methods beginning with do (doSave, doCheckin, doCheckout, etc.), not save, checkin, etc. Usually you need to add some functionality rather than replace the default one, so remember to also call the superclass method: super.[Overridden_Method].
Compile the implementation class(es) and pack them into a jar (attach-impl.jar): ensure that the interface and the implementation end up in different jars.
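For example, assuming the compiled classes end up under a bin folder, the two jars could be built from the command line like this (the paths are only an illustration):

jar cf attach.jar -C bin com/company/project/attach/tbo/IAttach.class
jar cf attach-impl.jar -C bin com/company/project/attach/tbo/impl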

3. Create Jar Definition artifacts
In the past DAB (Documentum Application Builder) was used to manage Documentum artifacts, but it was replaced by Composer since version 6.5.
In Composer create a new Documentum project and give it a relevant name. Then right-click the Artifacts folder of your project and choose New->Jar Definition (if you don't have it in the list, choose New->Other and look for Jar Definition under the Documentum Artifact category). Type the name of the artifact exactly as the name of the jar file (it's not a requirement, but it's a good habit that avoids confusion).
You must create 2 Jar Definitions, one for the interface and one for the implementation. So for this sample we create attach.jar with Type: Interface and attach-impl.jar with Type: Implementation.

4. Create Java Library artifacts (optional)
All the libraries used by your TBO code must be deployed in the repository and related to the TBO module. These jars are packaged in artifacts called Java Libraries.
First you must create a Jar Definition for each required jar. Then right-click the Artifacts folder and choose New->Java Library, enter a name, then in the JARs field add the jars you need (all jar artifacts from the current project are available for selection).

5. Create Module artifact
Now we can proceed to create the TBO module. Right-click the Artifacts folder, select New->Module and type the name of the custom type you want to assign the TBO to (it won't work if the module name does not match the type name). In the Type combo choose TBO. Next to the Implementation Jars field click Add and select attach-impl.jar; for Interface Jars click Add and select attach.jar; for Class name click Select and choose the class com.company.project.attach.tbo.impl.Attach (the list of available classes will be loaded from the implementation jar you've selected). If your TBO implementation code uses other modules (TBOs, SBOs, etc.), add them in the Required Modules field. In the bottom-left corner choose the Deployment tab and add the required Java Libraries (the libraries referenced by your TBO code).

6. Deploy the TBO
Ok, now all the artifacts required for the TBO are ready to be deployed into the repository. Build the Composer project (if the Build Automatically flag is not checked) by choosing Project->Clean from the menu bar. The output will be a DAR file located in the bin-dar folder of the project. You can either install the project directly from Composer (right-click the project and choose Install Documentum Project) or use DarDeployer (previously named DarInstaller).
Select the DAR file and the repository, enter user name and password and click Install. After the DAR/project is installed, restart the JMS and the Application Server (and any other client using this repository) and clean the BOF cache to ensure that the latest versions of the jars will be downloaded from the repository.

That's all, now you can test your TBO (to find out how to test your TBO changes without re-deploying, check this article: How to test BOF (TBO/SBO) code changes without re-deployment). It will be called whenever objects of the mapped type are changed, whether by DFC code or by DQL queries.
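As a quick smoke test, here's a hedged DFC sketch - it assumes an open IDfSession and an existing object of the sample type used above (the type and object names are illustrative only):

// fetch an object of the custom type - DFC instantiates the registered TBO class for it
IDfSysObject obj = (IDfSysObject) session.getObjectByQualification("prj_attach where object_name = 'Test document'");
IAttach attach = (IAttach) obj;   // cast to the TBO interface
attach.setDoctype("Invoice");     // custom method from the interface
attach.save();                    // triggers doSave(), which copies prj_doctype into the title
DfLogger.debug("TBO test", "Title after save: " + attach.getTitle(), null, null);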
With these steps in front of you, creating TBOs is pretty easy.

Friday, November 30, 2012

Documentum 6.7 CTS troubleshooting: after installation

If you've come here I guess you're in the same trouble I was while installing and configuring Documentum CTS 6.7.x.
If you've read the first part of my CTS Troubleshooting Guide, but didn't find the solution, try your luck by reading this article too.
1. Server returns HTTP response code 404
If you find in the logs the following error:
com.documentum.cts.plugin.advancedpdf.AdvancedPDFProcessor - Exception within  processResponse() method: Server returned HTTP response code: 404 for URL: http://localhost/exponentwsa/exponentwsa.asmx/DeleteJob

Open in the browser the following URL: http://localhost:80/exponentwsa/exponentwsa.asmx/DeleteJob
The page shows "HTTP 404.2 - Not Found" error.
Solution: To fix this, open IIS Manager (from Server Manager), then find and open 'ISAPI and CGI Restrictions'. The ASP.NET vX.X entries must have the Restriction value 'Allowed'.

2. Server returns HTTP response code 500
This error occurs when you find in the logs something like:
com.documentum.cts.plugin.advancedpdf.AdvancedPDFProcessor - Exception within  processResponse() method: Server returned HTTP response code: 500 for URL: http://localhost/exponentwsa/exponentwsa.asmx/AddJob

Open the URL in the browser to check that you really get a 500 response code. Then open IIS Manager and go to Application Pools. Check that the ExponentWSA pool is started. Open the pool settings and check that .NET Framework v2.0.XX is selected. If you changed any value, restart the server.

3. CTS does not create renditions
One possible cause of CTS malfunction is the user set to run the CTS & Adlib services. There are 7 services, of which 3 (Adlib FMR, Adlib Process Manager and Documentum CTS Admin. Agent) must run as 'Local System', while the remaining 4 (Adlib Exponent Connector, Adlib Exponent Manager, Documentum Content Transformation Services, Documentum Content Transformation Monitor Services) must run as [DOMAIN\]SUPERUSER (SUPERUSER - normally the installation owner).
Restart the CTS services if you've changed any settings.

4. CTS can perform a transformation only when the server is remotely connected (Windows 2008 R2 x64)
There's a known issue on Windows 2008 R2 x64: renditions work only while the install owner is connected to the server via a remote session. The adexps.exe process is closed when the install owner logs off.
To fix this, you have to edit the file: ..\Program Files (x86)\Adlib\Process Manager\ProcessManagerInitSettings.xml (default path):
Change the following lines (all occurrences):
<ProcessLaunchType>LaunchAndWatch</ProcessLaunchType>
<ProcessSessionType>UserSession</ProcessSessionType>
to
<ProcessLaunchType>LaunchAndWatchSession</ProcessLaunchType>
<ProcessSessionType>SystemSession</ProcessSessionType>

Save the file and restart the CTS services.

These are the main issues encountered with CTS. If you face other issues, feel free to post them into comments, I could give some ideas.

Documentum 6.7 CTS troubleshooting: before installing

Installing Documentum CTS (including DTS/ADTS/MTS/etc.) became a bit tricky in versions 6.7.x since it requires more pre-requisite components to be installed:
1. IIS Server
2. ASP.NET
3. Message Queuing
4. .NET Framework

Moreover you have to install a certain set of subfeatures. These services/features can be installed in Windows using Server Manager. Start by adding the role Web Server (IIS), then check the following features/services to be installed for it:

- Under Application Development:
* ASP.NET
* .NET Extensibility
* ISAPI Extensions
* ISAPI Filters

- Under Management Tools:
* IIS Management Console
* IIS Management Compatibility + all subfeatures

Then go to the Add Features section and add the Message Queuing Services feature with the following subfeatures:
* Message Queuing Server
* Directory Service Integration
* HTTP Support

Well this could be enough to go on with the CTS products installation, however your system still might miss some configurations that will prevent your CTS from working properly.
First, go to IIS Manager, open Sites and check you have a site called: Default Web Site
Open the browser (for ex. IE) and open the following address: http://localhost 
If default IIS web site is working you will see IIS welcome page (Welcome in different languages).

Ensure you have Microsoft Office installed (Word, Excel, PowerPoint). 

If you've done all the steps above, you can go on with installation of CTS.
During the installation you might encounter the following error: The installation cannot continue until the following conditions are met: Microsoft Office must be installed.
Well, even if you've installed Microsoft Office, the CTS installer might not 'see' it, so you have to intervene manually in the Windows Registry. Check if you have the following keys in the Registry (if you don't - create them):
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\office_outlook
Name: DisplayName
Type: String
Value: Microsoft Office Outlook
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\office_word
Name: DisplayName
Type: String
Value: Microsoft Office Word
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\office_excel
Name: DisplayName
Type: String
Value: Microsoft Office Excel
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\office_powerpoint
Name: DisplayName
Type: String
Value: Microsoft Office PowerPoint
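If you prefer not to click through regedit, the same keys can be imported from a .reg file - here's a sketch for the Word entry only (repeat the section for outlook, excel and powerpoint):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\office_word]
"DisplayName"="Microsoft Office Word"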

Now the installation should complete successfully. You can start the CTS services. All looks fine, but... no, some errors, again!
If you have a 64-bit OS you might find in the CTS log the following kind of errors:
Unable to instantiate the following MP: com.documentum.cts.plugin.advancedpdf.AdvancedPDFPlugin
java.lang.UnsatisfiedLinkError: D:\Documentum\CTS\lib\JNI_WindowsService.dll: Can't load AMD 64-bit .dll on a IA 32-bit platform

The cause is the CTS installer which, unlike the CS installer, ships only a 32-bit JDK, while on a 64-bit OS you need the 64-bit one. So, preferably before configuring a CTS instance for a repository, you have to perform the following workaround (also described in the CTS installation guide):
1. Install a 64-bit 1.6.x JVM in a separate folder. Then rename Documentum's 32-bit java folder (for ex: %Documentum%\java\1.6.0_17) by adding _32 (%Documentum%\java\1.6.0_17_32), create in the same path a new folder with the original java name (%Documentum%\java\1.6.0_17), and copy the entire content of the 64-bit Java installation into this newly created folder.
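The same folder shuffle expressed as Windows commands (the paths and JDK version are only examples - adjust them to your installation):

ren "C:\Documentum\java\1.6.0_17" 1.6.0_17_32
mkdir "C:\Documentum\java\1.6.0_17"
xcopy /E /I "C:\Program Files\Java\jdk1.6.0_31" "C:\Documentum\java\1.6.0_17"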

2. Set environment variables (JAVA_HOME, PATH) to point to this Java version installation.

3. Once the Java has been updated, update the Windows registry value for CTS Admin Agent to use the older 32-bit Java:
Key:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\CtsAdminAgent\Parameters\Java
Property Name: JVM
Property Value: C:\PROGRA~1\DOCUME~1\java\1.6.0_17_32\jre\bin\server\jvm.dll

Now you can configure CTS instances for your repository(ies).
If you encounter further issues with CTS performance, see the second part of this CTS Troubleshooting Guide.

Tuesday, October 30, 2012

JMS 6.7 SP1 error connecting to a non-existent docbase

Recently I've installed Content Server 6.7 SP1 with patch 08. When I start the Java Method Server, I find the following error in the logs:
INFO [STDOUT] (main) 10:02:37,932 ERROR [main] com.documentum.mthdservlet.MethodConfig - DfNoServersException:: THREAD: main; MSG: [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error: "The DocBroker running on host ([HOST]:1489) does not know of a server for the specified docbase ([INSTALLATION_OWNER])"; ERRORCODE: 100; NEXT: null

So it seems JMS client tries to connect to a docbase which has a name equal to the installation owner of the current docbase. Error in JMS configuration? Nope - checked all configurations, everything's correct. So what's the problem?
Ok, I checked the stack trace again and saw that the problem comes from the populateDocbaseNames method of the MethodConfig class, which lives in the mthdservlet.jar library.
Decompiled the jar, opened the method and here's the big surprise from EMC developers:
...
  String str1 = (String)localEnumeration.nextElement();
  if ((!Utils.isNull(str1)) && (str1.toLowerCase().startsWith("docbase")))
...

These lines read all the docbase names from web.xml, found in ...\ServerApps.ear\DmMethods.war\WEB-INF\
Opening the file we see the tags:
    <init-param>
      <param-name>docbase-my_docbase</param-name>
      <param-value>my_docbase</param-value>
    </init-param>

So it should read this docbase and all other available & configured repositories, the <param-name> tag having values of the format 'docbase-[DOCBASE_NAME]'.
Ok, but I have only 1 repository configured. Scrolling a bit, I find another tag:
    <init-param>
      <param-name>docbase_install_owner_name</param-name>
      <param-value>dmadmin</param-value>
    </init-param>

With the code above - startsWith("docbase") - it will also read this tag and interpret it as a docbase name. Ok, then I decompiled an older version of mthdservlet.jar and found slightly different code:
  if ((!Utils.isNull(str1)) && (str1.toLowerCase().startsWith("docbase-")))

Here it is! A genius EMC developer removed that dash after docbase: startsWith("docbase-"). Well, sh*t happens, even to geniuses.

So while we wait for a patch for this patch :) we can use the old version of mthdservlet.jar or just ignore this error, as it has no impact on JMS operation.

How to recover deleted document

If you want to recover an object deleted from the Documentum repository, there's a good chance you can recover its content. Here you'll find the steps to recover a document's content even without having many details about it.
Object metadata can be recovered only if you have a database backup made before the deletion.
Anyway, usually the most important thing is the content itself, not the metadata, so we'll focus on the procedure of recovering the content of the removed document.

The first and most important thing to do is to disable the dm_DMClean job, which cleans up orphaned objects, including the content ones. Check the job's last execution time: if it ran after the document was deleted, I'm sorry - the content is lost (well, if you have both DB & content backups you can recover anything you want).
Also check the dm_DMFileScan job: usually it's disabled, but if it's enabled you'd better disable it until you recover your document.

Next, our task is to find the dmr_content object which has information about the content location.
As there might be thousands of orphaned content objects, try to get as much information as possible about the deleted document:
1) Date/time of deletion and user who deleted the document
2) Date/time of creation / last modification of content (checkin)
3) File format, approx. size, object name

The query to get the content objects having no associated metadata objects (dm_sysobject) is:
select * from dmr_content where any parent_id is null

The problem is that this query will most probably give you far too many results, and I guess you don't want to still be looking for the right document when you turn 65 :)

Now let's see how this information can help you to narrow the results:
1) Date/time of deletion and user who deleted the document
Hoping you have auditing enabled, you can get some information from this audit:
select * from dm_audittrail where event_name='dm_destroy' and time_stamp > date('some date before deletion') and user_id = (select r_object_id from dm_user where user_name='USER_WHO_DELETED')

From the results returned, if you find a record that seems to represent the deleted document, grab the object_name value

2) Date/time of creation / last modification of content (checkin):
select r_object_id,full_format from dmr_content where any parent_id is null and set_time > date([time before creation]) and set_time < date([time after creation])

3) File format, aprox. size, object name (possibly grabbed at step 1):
select r_object_id,full_format from dmr_content where any parent_id is null and full_format='[FORMAT]' and content_size > [MIN_SIZE] and content_size < [MAX_SIZE] and set_file like '%[OBJECT NAME]%'

Note: You can combine the filters from points 2 & 3 if you have this information. The more filters you use, the fewer results you'll get.
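For example, combining the time window from point 2 with the format and size filters from point 3 could look like this (all values are placeholders):

select r_object_id, full_format, set_file, set_time from dmr_content where any parent_id is null and set_time > date('01/10/2012 00:00:00','mm/dd/yyyy hh:mi:ss') and set_time < date('03/10/2012 00:00:00','mm/dd/yyyy hh:mi:ss') and full_format='pdf' and content_size > 100000 and content_size < 500000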

Ok, so now you have a list of content objects (hopefully not too big). Now you can get corresponding paths to the files, on the file storage.
For each id in the list generate a DQL command:
execute get_path for '[ID]'
where [ID] is the r_object_id of dmr_content object

Executing the obtained script you get a list of file paths. Copy the results to a file.
Now you have 2 options for getting the files:
1) You can get the content files directly (without creating objects in repository), by obtained paths. Generate a script that will copy all the files to your folder. For example, you can use commands like:
cp [OBTAINED PATH] /target_folder/[COUNTER].[FULL_FORMAT]

where COUNTER is a counter (1..n) - to avoid name conflicts during the copy operation.

2) Create new objects in the docbase by generating a DQL with queries like:
create my_type object set object_name='Some identifier', link '[Folder path]', setfile '[PATH]' with content_format='[FORMAT]'

If you recovered multiple documents, you can open them and find the one you were looking for.
Once you're happy with the recovered document, don't forget to re-enable the dm_DMClean job if you disabled it.

Thursday, October 4, 2012

How to make a custom WDK qualifier

WDK features, definitions and settings can be scoped so they are presented only when the user's context or environment matches the scope definition.
In other words you have a filtering mechanism using qualifiers. WDK provides the following standard qualifiers:
- DocbaseNameQualifier (scope: docbase, name of the docbase to which application connects)
- DocbaseTypeQualifier (scope: type, matches the document type)
- PrivilegeQualifier (scope: privilege, matches user privileges)
- RoleQualifier (scope: role, matches user role)
- ClientEnvQualifier (scope: clientenv, values: "webbrowser", "portal", "appintg", or "not appintg")
- AppQualifier (scope:application, matches the application name)
- VersionQualifier (scope: version, matches the application version)
- EntitlementQualifier (scope:entitlement, checks entitlement evaluation classes)
- ApplicationLocationQualifier (scope:location, matches the navigation location)

However you might need custom scoping, and for that you must create a custom qualifier.
To create a custom qualifier, perform the following steps:
1) Create the custom qualifier class, which implements the IQualifier interface. Here's a sample:
// IQualifier and QualifierContext come from the WDK config framework (com.documentum.web.formext.config)
import com.documentum.web.formext.config.IQualifier;
import com.documentum.web.formext.config.QualifierContext;

public class CustomQualifier implements IQualifier {

    final static public String QUALIFIER_NAME = "customQualifier";

    public String[] getAliasScopeValues(String strScopeValue) {
        return null;
    }

    public String[] getContextNames() {
        return new String[] { QUALIFIER_NAME };
    }

    public String getParentScopeValue(String strScopeValue) {
        return null;
    }

    public String getScopeName() {
        return QUALIFIER_NAME;
    }

    public String getScopeValue(QualifierContext context) {
        String sCustomQualifier = "";

        // custom code here to find the qualifier value to be set
        // this code is called often, so it must not be 'heavy'
        // consider using the cache

        return sCustomQualifier;
    }
}

2) Add your qualifier definition to app.xml file of your custom layer (usually custom folder):

inside <qualifiers> tag add your qualifier's class full name:
<qualifier>com.mycompany.wdk.qualifier.CustomQualifier</qualifier>

3) Use custom scoping & filtering in your components' xml definitions:
a)
<scope customQualifier="someValue">
....your definitions here...
</scope>

b)
<filter customQualifier="someValue">
    ....your definitions here...
</filter>

That's all. Keep in mind that qualifiers impact application performance because the qualifier's class is called on each read of definitions. So try to avoid adding custom qualifiers if you have other options.

Tuesday, October 2, 2012

How to obtain dfc data directory in DFC


The DFC data directory is configured in dfc.properties, in the dfc.data.dir property. If it's not specified, the default Documentum path is used.
You might need this location in order to read a configuration file, which is normally stored in the config folder under the dfc data directory.

If you want to find the location of this folder, the following DFC code can be used:
// get dfc.data.dir:
String dataDir = DfPreferences.access().getDataDirectory();
// get config folder:
File configDir = new File(new File(dataDir), "config");
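From there you can, for example, load a custom properties file stored in that config folder (the file name below is just an illustration; requires java.io and java.util.Properties imports):

// read a custom properties file located in the DFC config folder
Properties props = new Properties();
FileInputStream in = new FileInputStream(new File(configDir, "myapp.properties"));
try {
    props.load(in);
} finally {
    in.close();
}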

Wednesday, August 29, 2012

How to delete a Documentum repository

For many years the repository served us truly and faithfully, but the time has come... and we don't need it anymore.
Yes, we will delete it! Unfortunately it's not as trivial as running a single wizard.
Below are the steps to do a complete removal:

0. Check one more time you're deleting the right repository (and it's not live production) :)

1. Drop (uninstall) the Index Agent
1.1 Stop the Index Agent (from DA or Index Admin page)
1.2 On the Index Server host, start the Index Agent Configuration Program, select 'Delete Index Agent', then select the Index Agent (if you have more than one).
1.3 From Index Server Admin page (ie: http://localhost:16000/admin) delete the corresponding collection for the docbase.

2. Remove DTS / ADTS service from the repository
2.1 Start the ADTS Configuration Utility. Check that your docbroker and docbase are running before proceeding.
2.2 Choose 'Remove an instance of ADTS from a docbase', then select the repository, enter installation owner credentials and finish the procedure.

3. Stop any other applications or/and services that are using the repository

* If you don't know the content files location, check it now before removing the repository (dm_location object)

4. Remove the docbase service
4.1 Launch Documentum Server Manager, on the Utilities tab click Server Configuration and follow the wizard instructions.
4.2 Choose Custom configuration, Delete an existing repository, choose the repository and complete the procedure.

5. Delete DB tablespace(s) and schema.

6. Delete docbase files from the Content Server host
6.1 Delete the content files
6.2 Delete the log files

Congratulations! You've done it! Next repository? :)

Tuesday, August 28, 2012

How to clone a Documentum repository

Why clone a Documentum repository? Well, sometimes it is required to perform some tests (new software/versions/features) on the real production data. A development environment might not be very relevant because usually it has a very reduced set of data. There might also be plenty of other reasons.
Cloning a repository means creating an identical copy of it, with no impact on the original repository.
This activity is not trivial and requires several steps to be performed:
* Note: The supposed OS is Windows, for other OS some steps might require appropriate changes.

1. Create a new DB schema and clone the data from the schema of the repository.
2. Stop the repository service.
3. Copy the content and configuration files from the source Content Server file system. On Windows you can use the robocopy command, like this:
robocopy [SOURCE_PATH] [DESTINATION_PATH] /NP /MIR /SEC /R:10 /W:10 /LOG:[LOG_FILE] >> [OUTPUT_FILE]

 Copy the following paths:
  a) [DOCUMENTUM]\dba\config\[repository_name]
  b) [DOCUMENTUM]\dba\auth\[repository_name]
  c) Folder with content files (obtain it from the dm_location object)

4. Execute the SQL script (on the new DB schema):
-- Set the clone host name
update dm_server_config_s set r_host_name='[NEW_HOST]';
update dm_mount_point_s set host_name='[NEW_HOST]';
update dm_server_config_s set web_server_loc='[NEW_HOST]';

-- Force server to recreate views
update dm_type_s set views_valid = 0;

-- Set the clone content location
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]' where file_system_path='[CONTENT_PATH_OLD]';
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]\replicate_temp_store' where file_system_path='[CONTENT_PATH_OLD]\replicate_temp_store';
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]\replica_content_storage_01' where file_system_path='[CONTENT_PATH_OLD]\replica_content_storage_01';
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]\content_storage_01' where file_system_path='[CONTENT_PATH_OLD]\content_storage_01';
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]\thumbnail_storage_01' where file_system_path='[CONTENT_PATH_OLD]\thumbnail_storage_01';
update dm_location_s set file_system_path='[CONTENT_PATH_NEW]\streaming_storage_01' where file_system_path='[CONTENT_PATH_OLD]\streaming_storage_01';

-- Set job server execution
update dm_job_s set target_server='[DOCBASE_NAME].[DM_SERVER_CONFIG.OBJECT_NAME]@[HOST]' where target_server = '[DOCBASE_NAME_OLD].[DM_SERVER_CONFIG.OBJECT_NAME_OLD]@[HOST_OLD]'

-- Note: If the cloned repository was served by more CS instances (so more dm_server_config objects present) you must run a query like this per each server config object.
-- The value [DM_SERVER_CONFIG.OBJECT_NAME] can be obtained with this query: select object_name from dm_server_config

-- Disable the jobs (will be reactivated later)
update dm_job_s set is_inactive=1;

-- ACS config: assuming the repository is served by 2 CS
update dm_acs_config_r set acs_base_url='http://[NEW_HOST]:[PORT]/ACS/servlet/ACS' where acs_base_url in ('http://[OLD_HOST_1]:[PORT]/ACS/servlet/ACS', 'http://[OLD_HOST_2]:[PORT]/ACS/servlet/ACS');
-- PORT: default is 9080

-- Reset the crypto key
update dm_docbase_config_s set i_crypto_key = ' ', i_ticket_crypto_key = ' ';
delete from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
delete from dmi_vstamp_s where i_application = 'dm_docbase_config_ticket_crypto_key_init';

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_public_key_certificate_s where key_type = 1;

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_cryptographic_key_s where key_type = 1;

-- Old fast index configuration cleanup (if FullText is installed)
update dm_ftengine_config_r set param_value='[FULLTEXT_HOST]' where param_name='fds_config_host';
update dm_ftengine_config_r set param_value='[FULLTEXT_HOST]' where param_name='query_engine_host_name';

delete from dm_ftindex_agent_config_s;
delete from dm_sysobject_s where r_object_type='dm_ftindex_agent_config';

-- Old ADTS configuration cleanup (if ADTS is installed)
delete from cts_instance_info_s;
delete from cts_instance_info_r;
delete from dm_sysobject_s where r_object_type='cts_instance_info';

commit;

5. Content Server update

5.1 Update server.ini
Edit the server.ini file located in %DOCUMENTUM%\dba\config\[DOCBASE_NAME] and update the following fields:
database_conn [NEW_DB_INSTANCE]
[DOCBROKER_PROJECTION_TARGET]
host = [NEW_DOCBROKER]
port = 1489

5.2 Re-encrypt the Database password
cd %DM_HOME%\bin
dm_encrypt_password -docbase [NEW_DOCBASE_NAME] -rdbms -encrypt [DB PASSWORD]

5.3 Windows service.
Execute the following script to create the Windows service for the repository (you can save it in a .bat file and run it):
@echo off
setlocal

set docbase=[DOCBASE_NAME]
set instowner=[INSTALLATION_OWNER]
set iodom=[DOMAIN]
set iopw=[PASSWORD]

set binPath=%DM_HOME%\product\[VERSION]\bin\documentum.exe -docbase_name %docbase% -security acl -init_file E:\Documentum\dba\config\%docbase%\server.ini -run_as_service -install_owner %instowner% -logfile E:\Documentum\dba\log\%docbase%.log

sc create DmServer%docbase% binPath= "%binPath%" start= demand DisplayName= "Documentum Docbase Service %docbase%" obj= %iodom%\%instowner% password= %iopw%

sc description DmServer%docbase% %docbase%_clone

endlocal

5.4 Add port numbers
Edit %WINDIR%\system32\drivers\etc\services and add the following entries at the end of the file:
dm_[DOCBASE_NAME] [N]/tcp #Documentum Docbase Service [DOCBASE_NAME]
dm_[DOCBASE_NAME]_si [N+1]/tcp #Documentum Docbase Service [DOCBASE_NAME] (secure service)

* where N is the last port number used in the list, incremented by one

5.5 Windows Registry update

Export the docbase registry key from the Windows registry on the old machine, from branch:
"HKLM\SOFTWARE\Documentum\DOCBASES\[DOCBASE]"

Open the exported file and update the following keys:

"DM_DOCBASE_CONNECTION" => "[DB_INSTANCE]"
"DM_HOME" => "[PATH]\\product\\[VERSION]" (ie: C:\\Documentum\\product\\6.7)
"DOCUMENTUM" => "[DOCUMENTUM_PATH]" (ie: C:\\Documentum)
Save the file with UTF-8 encoding.

Update the registry on the target machine by importing the reg file.

5.6 Create session logs folder:
[DOCUMENTUM]\dba\log\[DOCBASE]

6. Start the repository service

7. Post-cloning activities

7.1 Reset the inline passwords
Inline passwords must be reset (usually they no longer work because of the encryption changes).
Use a DQL like this:
update dm_user objects set user_password=user_name where user_source='inline password' [AND user_name not in (exception_list)]

or provide an explicit list of users:
update dm_user objects set user_password=user_name where user_name in (update_list)
Usually update_list contains users such as dm_bof_registry, dm_fulltext_index_user, dmc_wdk_presets_owner and others.

7.2 Install other required products: Index Agent, ADTS instance, etc.

Tuesday, August 21, 2012

How to change object type

Documentum allows the creation of custom types, which define custom attributes in addition to the inherited ones. Once your custom type is defined and created in the repository, you can create objects - instances of that custom type.
If you have created objects of a certain (sub)type, you can still change their type. The easiest way is to use the following DQL:
CHANGE current_type [(ALL)] objects to new_type [update_list] [WHERE qualification]
* ALL - change the type of all object versions
* update_list - list of updates to object attributes to perform during the change type operation

Obviously current_type must be a supertype or subtype of new_type. Thus, you can move object instances up and down the type hierarchy.
Ok, if you move from a supertype to a subtype you get additional attributes that are empty. But what happens when moving from a subtype to a supertype? The values of the subtype's custom attributes are lost (you can use the update list if you want to copy the values to other attributes that are not lost).

You should take note of the following constraints on this operation:
1. You can only change the type of objects that are subtypes of dm_sysobject. (If you want to change the object type of an object that is not a dm_sysobject, follow the procedure described here: URL).
2. The object's current type and the new type must have the same type identifier (which is the first two characters of the object ID. For example, dm_document has 09).
3. The old and new types can't be at the same level in the type hierarchy. The new type must be a supertype or subtype of the current type.
For example, you have my_car and its subtypes my_toyota and my_lexus. In order to change an object of type my_toyota to type my_lexus, you must do it in 2 steps:
change my_toyota objects to my_car;
go;
change my_car objects to my_lexus;
go;

4. You must have DELETE permission on the objects you are changing.

Thursday, May 31, 2012

Useful DQL queries


DQL to get all empty (sub)folders in a cabinet:
select * from dm_folder where r_link_cnt=0 and folder('/Temp',descend)

DQL to get list of documents and their folder path:
select distinct d.r_object_id,d.object_name,f.r_folder_path from dm_document d, dm_folder f where any d.i_folder_id=f.r_object_id and r_folder_path is not nullstring enable(ROW_BASED)

DQL to display the supertypes hierarchy branch of the specified type:
select r_supertype from dmi_type_info where r_type_id = (select r_object_id from dm_type where name='my_type')

DQL to get number of modified documents for each month:
select datetostring(r_modify_date,'mm/yyyy'),count(*)from dm_document [WHERE condition] group by datetostring(r_modify_date,'mm/yyyy')

DQL to execute an SQL query:
execute exec_sql with query = 'create or replace view my_view (cod) as (select some_id from my_table)'

DQL to get the object type of a document:
select r_object_type from dm_document where r_object_id='092e6adc800001f0'

DQL to get the number of sysobjects for each object type:
select count(*),r_object_type from dm_sysobject group by r_object_type

DQL to create a DB index on a type attribute:
EXECUTE make_index WITH type_name='dmi_workitem',attribute='r_workflow_id'

DQL to see Documentum sessions on current Content Server:
execute show_sessions

DQL to get ids of documents deleted in a time interval:
select * from dm_audittrail where event_name='dm_destroy' and time_stamp > date('date before') and time_stamp < date('date after')

DQL to get the user that deleted a document:
select * from dm_user where r_object_id= (select user_id from dm_audittrail where event_name='dm_destroy' and audited_obj_id='ID OF DELETED OBJECT')

Friday, April 27, 2012

How to set the same content on multiple objects

There are situations when you want to set the same content on multiple objects. If you use the standard approach of setting the content on each object, a content object (dmr_content) will be created for each document object. On one hand that's a bit of redundancy, isn't it? On the other - if you change one content, the other (related) contents won't be updated.
That's why it's more convenient to set (link) one content object to multiple document objects, an action known as binding.
You can bind the existing content of one object to another object via DFC code or an API call.

API> bindfile,c,[TARGET_ID],0,[SRC_ID],0

where:
TARGET_ID - the id of the sysobject you are binding to
0 - the position of the content in the object (0 is the first/primary content)
SRC_ID - the id of the source object that the content is assigned (linked) to
DFC code:
IDfId childId = new DfId("[target id]");
IDfId parentId = new DfId("[source id]");
IDfSysObject sysobject = (IDfSysObject) session.getObject(childId);
sysobject.bindFile(0, parentId, 0);
sysobject.save();

Wednesday, April 25, 2012

How to unlock a job in running state

There are situations when a running job fails and remains locked in running state. It usually happens when the JMS is stopped or other fatal errors occur.
To unlock such a job, the following steps should be performed (in this order until it's unlocked).

1) Check if the job is locked, using the following DQL:
select r_lock_date from dm_job where object_name='[JOB_NAME]'

if the returned value is not nulldate, the job is locked. Unlock it by executing the API command:
unlock,c,[r_object_id]
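For example, in IAPI (the job name is just a placeholder), you can retrieve the job and unlock the last retrieved object:

API> retrieve,c,dm_job where object_name='dm_DMClean'
API> unlock,c,l

where 'l' refers to the object returned by the previous retrieve.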

2) If the job is still in running state, perhaps the job's process is still active (hung) and must be killed.
a. Get the process ID, using the DQL:
select a_last_process_id from dm_job where object_name='[JOB_NAME]'
b. Using Process Explorer (or similar) locate the process by id and kill it
c. Check the job state (in DA)
3) Checked the job and it's still running? Don't give up, we're almost done!
Check the a_current_status attribute value (DQL: select a_current_status from dm_job where object_name='[JOB_NAME]').
If it's 'STARTED', we should change it to 'FAILED'. Use the following DQL:
update dm_job object set a_current_status='FAILED' where object_name='[JOB_NAME]'

4) If your job still hasn't got rid of the running status, perhaps it still holds a reference to the application that locked it.
Execute the DQL query:
select r_object_id, a_last_invocation, a_last_completion, a_special_app from dm_job
where (((a_last_invocation IS NOT NULLDATE) and (a_last_completion IS NULLDATE)) or (a_special_app = 'agentexec'))
and (i_is_reference = 0 OR i_is_reference is NULL) and (i_is_replica = 0 OR i_is_replica is NULL)

If the query returns your job, execute the following DQL:
update dm_job objects set a_special_app='' where object_name='[JOB_NAME]'

That's it, your job is unlocked now and ready to be run again.

Monday, April 23, 2012

How to change a job's scheduled run time

Even if you have changed the scheduled time for a job to run (using DA, on the job properties, Schedule tab), you will notice that the job continues to run at the old time.
What's wrong? Let's have a look behind the curtains!
The agentexec process polls the a_next_invocation attribute of the dm_job object to determine when the job should run next.
In fact, when you update the job schedule, this attribute is not updated (it is only updated when the job runs). So you have to update it manually, and there are 2 options:

1. API (using DA or other client):
retrieve,c,dm_job where object_name='[JOB_NAME]'
set,c,l,a_next_invocation
04/09/2012 19:00:00
save,c,l
2. DQL:
update dm_job object set a_next_invocation=DATE('04/09/2012 19:00:00','mm/dd/yyyy hh:mi:ss') where object_name='[JOB_NAME]'

If your changes had no effect, perhaps you have to reinitialize the Content Server process (thread). Use the following API command for that:
reinit,c

LDAP Sync - Solve user name conflicts

When an AD (Active Directory) user object's name is changed (userPrincipalName, sAMAccountName, or other attributes) and later another AD object is created with the original name of the first user object, a conflict occurs in the LDAPSync'ed repository.
This is caused by the fact that the user_name attribute of the dm_user objects is not changed during LDAP synchronization (by default the user rename option is disabled in the LDAP config object) - thus another object with the same value can't be created (as this field is unique).
Another conflict situation can occur when 1 repository is sync'ed against several AD domains and some user name values are equal across domains.

The steps to solve this conflict are:
1. Rename the old user object in the repository
(user_login_name has changed while user_name still holds the old value).
For this: open DA, Users section, search for the user, right-click on it and choose Reassign User option.
In the Reassign field enter the new user name (should be current user_login_name value).
Select the option 'Run the Reassign job Now'.
If the user has locked objects, perhaps it's better to select Checked Out Objects Leave all locked (though if the user is inactive, perhaps it's better to unlock).
Press Ok to perform the changes, which will start the dm_UserRename job shortly.
2. Check the dm_UserRename job report
Search for the dm_UserRename job and check if it is running or has already completed (after your change at step 1); if it didn't start yet (the CS can be loaded with other jobs), wait until it runs. Notice that usually the job is in inactive state, but this is normal.
After the run has completed (see Last Run column value), view the job report (from context menu) and check that the user has been renamed, if there were some issues or if some repository objects still have references to the old value.

3. Run immediately the dm_LDAPSynchronization job (optional)
Running this job manually is not mandatory as it usually runs by schedule. However, if the user updates are urgent, the job can be run manually, but an additional trick must be performed in order to not disturb the time intervals of the synchronizations. Perform the following steps:

a) Retrieve the a_last_run and a_last_no from the ldap config object and store them:
DQL> select a_last_run,a_last_no from dm_ldap_config
a_last_run                            a_last_no        
================ ============
20091105210022.0Z         195290415        

b) Store the current value of dm_ldap_config.per_search_filter, then change it (to update only the 1 required user) to:
DQL> update dm_ldap_config object set per_search_filter='(&(userPrincipalName=MY_USER*)(mail=*))'
(assuming 'MY_USER' is the new user to be created, which failed due to the conflict. You can keep the old filter and just add an additional condition: (userPrincipalName=MY_USER*) )

c) Run the dm_LDAPSynchronization job. Check the report and see if the user has been created properly. Also check the user is indeed created (using DA/samson/other client)

d) Restore previous ldap config object settings:
update dm_ldap_config object set a_last_run='20091105210022.0Z',set a_last_no='195290415', set per_search_filter='[ORIGINAL_VALUE]'

Tuesday, March 6, 2012

How to restart the Documentum ADTS (CTS) services


The Documentum ADTS (Advanced Document Transformation Services) is a suite of services (which analyze and transform repository content) that must be restarted in a defined order.
Below is the correct way to restart the ADTS services:

Change the startup type to 'Disabled' and stop the services:
1. Adlib Exponent Connector
2. Adlib Exponent Manager
3. Adlib FMR
4. Adlib Express Server
5. Documentum Content Transformation Monitor Service
6. Documentum CTS Admin. Agent
7. Documentum Content Transformation Services
8. Clean the DFC cache (for ex: DM_HOME\dfcdata\cache)

Change the startup type to 'Auto' (or Manual) and start the services:
9. Change startup type to manual for Adlib FMR
10. Adlib Express Server (the Adlib FMR service will start automatically)
11. Adlib Exponent Manager
12. Adlib Exponent Connector
13. Documentum Content Transformation Services
13* Check the logs (mainly CTS_log.txt and AdvancePDF_log.txt)
14. Documentum Content Transformation Monitor Service
15. Documentum CTS Admin. Agent

Monday, March 5, 2012

How to login to the repository with a ticket

Using a superuser account you can login to the repository with any existing user account using a ticket. In order to generate a login ticket, use the following API command:
getlogin,c,[user_name][,scope][,timeout_period]

You will obtain a ticket like this:
DM_TICKET=AAAAAgAAAOQAAAAKAAAAFUeUZYRHlGawAAAAOGxzY21zAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEZyYW4gU2Nod2lldHprZQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGxzY21zAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGx2Y21zMDEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGNlaWxpbmcxMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAM0pyOTdRTDJOakFUbVhwak15Y21CWDdLbG5zYnM2aUVUU1pXUy9kbHdwWjlzcllmaGIrNFZnPT0=


Now you can connect to the repository as this user using the ticket (the whole string, including DM_TICKET=) as the password: via Webtop, DA, Samson, DQMan or any other Documentum client.
The ticket will be valid for the timeout_period specified, or the default one (set in dm_docbase_config.login_ticket_timeout).
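The same thing can be done from DFC - a rough sketch, assuming you already hold a session opened with a superuser account (user and repository names are placeholders):

// adminSession is assumed to be an already opened superuser session
String ticket = adminSession.getLoginTicketForUser("jsmith");
// open a new session as that user, using the ticket as the password
IDfClient client = new DfClientX().getLocalClient();
IDfSessionManager sMgr = client.newSessionManager();
sMgr.setIdentity("my_repository", new DfLoginInfo("jsmith", ticket));
IDfSession userSession = sMgr.getSession("my_repository");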

Saturday, February 18, 2012

How to start a Documentum Workflow from DFC

Launching and running a Documentum Workflow from DFC code is not such a trivial task.
Below is a DFC code sample of starting a Workflow and attaching a package to it:

public void startWorkflow(IDfSession session, String processName, String wfName, String supervisorName) throws DfException {
    if (processName == null || processName.equals("")) {
        DfLogger.error(this, "the process name is empty", null, null);
        throw new IllegalArgumentException("The process name is not specified");
    }

    IDfProcess process = (IDfProcess) session.getObjectByQualification("dm_process where object_name = '" + processName + "'");
    if (process == null) {
        DfLogger.error(this, "process not found!", null, null);
        throw new DfException("startWorkflow - No such process: " + processName);
    }

    IDfWorkflowBuilder workflowBuilder = session.newWorkflowBuilder(process.getObjectId());
    workflowBuilder.getWorkflow().setObjectName(wfName);
    workflowBuilder.getWorkflow().setSupervisorName(supervisorName);
    if ((workflowBuilder.getStartStatus() != 0) || (!(workflowBuilder.isRunnable()))) {
        DfLogger.warn(this, "startWorkflow - workflow '" + wfName + "' is not runnable or its start status is not 0!", null, null);
        throw new DfException("cannot start Workflow!");
    }
    workflowBuilder.runWorkflow();

    // Adding attachments:
    IDfList attachIds = new DfList();
    attachIds.appendId(new DfId("09024a8580235bc4"));

    IDfList startActivities = workflowBuilder.getStartActivityIds();
    int packageIndex = 0;
    for (int i = 0; i < startActivities.getCount(); i++) {
        IDfActivity activity = (IDfActivity) session.getObject(startActivities.getId(i));
        workflowBuilder.addPackage(activity.getObjectName(), activity.getPortName(packageIndex),
                activity.getPackageName(packageIndex), activity.getPackageType(packageIndex), null, false, attachIds);
    }
}
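A hypothetical call of the method above, assuming an open session and an installed process template (the names are made up for the example):

startWorkflow(session, "Review Process", "Review of contract 42", "dmadmin");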

How to save a SysObject with new version in DFC

If you want to save an (updated) object as a new version, you won't find a single method for that in the IDfSysObject implementation.
You will have to do 2 operations: a checkout, then a checkin specifying a new version.
Well, there are 2 approaches to do that: using the IDfCheckoutOperation and IDfCheckinOperation operations, or using the IDfSysObject methods checkout() and checkin(boolean keepLock, String versionLabels).
The second approach is quite simple as it requires just 2 lines of code, while the first one is more complex, so let's see the code sample.
Beware: if you don't set the 'CURRENT' label on your new version, it will be the latest version, but not the current one. So it will be 'hidden'.
Find below code samples for the operations.

public IDfId saveAsNewVersion(IDfSysObject object) throws DfException {
    try {
        checkOut(object);
        // update the object here if necessary
        IDfClientX clientX = new DfClientX();
        IDfCheckinOperation ciop = clientX.getCheckinOperation();
        ciop.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);

        IDfCheckinNode ciNode = (IDfCheckinNode) ciop.add(object);
        // if you don't specify the CURRENT label, this document won't be the current version
        ciNode.setVersionLabels("CURRENT");
        if (!ciop.execute()) {
            DfLogger.error(this, "saveAsNewVersion: Save document with new version failed!", null, null);
        }
        IDfId id = ciNode.getNewObjectId();
        DfLogger.debug(this, "Document saved with new version, ID: " + id.getId(), null, null);
        return id;
    } catch (DfException e) {
        DfLogger.error(this, "", null, e);
        throw e;
    }
}

public void checkOut(IDfSysObject object) throws DfException {
    IDfClientX clientX = new DfClientX();
    IDfCheckoutOperation coop = clientX.getCheckoutOperation();
    IDfCheckoutNode coNode = (IDfCheckoutNode) coop.add(object);
    if (coNode == null || !coop.execute()) {
        throw new DfException("Could not checkout document: " + object.getObjectId());
    }
}
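And a hypothetical usage, called from the same class that contains the two methods above, assuming an open session and an existing document id:

IDfSysObject doc = (IDfSysObject) session.getObject(new DfId("09024a8580001234"));
IDfId newVersionId = saveAsNewVersion(doc);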

Tuesday, January 24, 2012

How to hide Add Repository browser tree node in Webtop

Probably many developers have faced the problem: how to hide the 'Add repository' node from the browser tree? No, there's no OOTB configuration for this.
The WDK Development Guide says you have to create your own custom component, with your own class, jsp and even a tld. That's cumbersome!
There's a trick that will hide that node faster: just copy browsertree.jsp into your custom layer, add a browsertree_component.xml definition (perhaps you already have it) extending the one from the webtop layer, and set your custom browsertree.jsp page. Now open your custom browsertree.jsp file and add at the end of it, inside the last javascript tag, the following JS code:

var addRepository = findMatchingElements('selectrepository','div');
addRepository.style.visibility='hidden';

function findMatchingElements(toMatch, tagname) {
    var reMatch = new RegExp(toMatch, "i"); // match and ignore case
    if (tagname == null) tagname = "*";     // if no tagname was passed, search all tags
    var elems = document.getElementsByTagName(tagname);
    for (var e = 0; e < elems.length; ++e) {
        if (elems[e].id.match(reMatch))
            return elems[e];
    }
}

That's all! Deploy and see it ... hidden.