Sunday, June 16, 2013

D2 4.1 custom logo

D2 has a standard logo which cannot be changed through configuration at the moment (though this is planned for version 4.2).
You can change the D2 logo in version 4.1, but it requires customizing the D2 webapp by modifying a D2 CSS file. You can change the logo both on the login window and in the menu bar. Here is the procedure to replace the D2 logo:

1. Prepare your custom D2 logo: PNG format, 70x50 pixels, with a transparent background
2. Copy your custom logo (in our case d2_custom.png) into the folder resources/com/emc/x3/client/resources/images/logo
3. To replace the D2 logo in the menu bar, open the file resources/themes/slate/css/xtheme-x3.css and search for "img.x3-portal-logo" (usually around line 61). In that CSS rule, add the part between the customization comments:

img.x3-portal-logo {
background-color: transparent !important;
z-index: 2;
width: 80px;
/* CSS customization start */
background-image: url("/D2/resources/com/emc/x3/client/resources/images/logo/d2_custom.png");
background-repeat: no-repeat;
height: 0 !important;
overflow: hidden;
padding: 60px 0 0;
/* CSS customization end */
}


4. To replace the D2 logo on the login window, in the same CSS file search for ".x3-login-window-banner" (usually around line 473) and add a new rule after it, the one between the customization comments:

.x3-login-window-banner {
/* height: 50px; */
background: -moz-linear-gradient(center top , #707070, #000000) repeat scroll 0 0 transparent;
background: -webkit-gradient(linear, left top, left bottom, from(#707070), to(#000000));
filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr=#707070,EndColorStr=#000000,GradientType=0);
border-bottom: 2px solid #CCCCCC;
/* the following props are temporary until we get logo img */
padding: 0 0 0 10px;
}

/* CSS customization start */
.x3-login-window-banner img {
background-image: url("/D2/resources/com/emc/x3/client/resources/images/logo/d2_custom.png");
background-repeat: no-repeat;
height: 0 !important;
overflow: hidden;
padding: 60px 0 0;
width: 80px;
}
/* CSS customization end */


After you make the changes, restart the application server (clearing its cache), clear your browser cache, then check that your custom D2 logo is displayed correctly.

Wednesday, June 12, 2013

D2 back-end tweaking

D2 is a new EMC Documentum application which comes with a brand new concept and technology. It's really dynamic, user-friendly and fast. To avoid performance degrading as time passes, we should tweak and fine-tune its environment. In this article we'll focus on the back end, the Documentum side. There are several easy tunings that can optimize your Documentum D2 system:

1. Disable unused jobs
The following jobs can be inactivated when you don't use certain features (a DQL sketch for inactivating them follows the list):

a) If you don't use external tasks (sent/received via email) in your Workflows:
- D2JobWFReceiveTaskMail
- D2JobWFSendTaskMail

b) If you don't use Workflows at all:
- D2JobWFReceiveTaskMail
- D2JobWFSendTaskMail
- D2JobWFCleanerWorkflows
- D2JobWFLaunchScheduledWorkflows
- D2JobWFCleanPopHistory
- D2JobWFFollowUpTaskNotif
- D2JobWFWorkflowsNotifications
- D2JobDelegation
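
For example, a minimal DQL sketch to inactivate the jobs from case a) (extend the object_name list for case b)):
update dm_job objects
set is_inactive = TRUE
where object_name in ('D2JobWFReceiveTaskMail', 'D2JobWFSendTaskMail')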

2. Review jobs schedule
Many D2 jobs are scheduled to run frequently (some every 5 minutes), so if certain features are rarely used, or the refresh rate is not critical, consider increasing the interval between runs.
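
To review the current schedules, a DQL sketch (assuming the standard D2Job* naming):
select object_name, is_inactive, run_mode, run_interval, a_next_invocation
from dm_job
where object_name like 'D2Job%'
order by object_name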

3. Activate and schedule Administration Jobs
Ensure the following Documentum administration jobs are active and run on a regular basis (a DQL check follows the list):
- dm_DMClean (removes deleted and orphaned objects from the docbase)
- dm_DMFilescan (removes deleted or orphaned content files from the file system)
- dm_LogPurge (removes server and session logs from the docbase and file system)
- dm_ConsistencyChecker (runs lots of integrity checks on the docbase)
- dm_UpdateStats (updates database table statistics and repairs fragmented tables)
- dm_QueueMgt (deletes dequeued Inbox items (dmi_queue_item) from the docbase)
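
A quick DQL sketch to verify their state and last runs:
select object_name, is_inactive, a_last_invocation, a_last_completion
from dm_job
where object_name in ('dm_DMClean', 'dm_DMFilescan', 'dm_LogPurge',
                      'dm_ConsistencyChecker', 'dm_UpdateStats', 'dm_QueueMgt')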

4. Disable/modify auditing when possible
D2 is a pretty dynamic environment: objects are saved and modified many times, and many jobs run, often frequently. This leads to a considerable amount of audit entries being created, which impacts repository performance.
If you don't need auditing of all objects in the repository for the default set of events, remove the dm_default_set event from audit management. You can then register this set and other events for the custom types used by your applications, or even a specific set of events per custom type.
You can remove default auditing from Documentum Administrator (Audit Management), or by using the unaudit API command.
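
A sketch of the corresponding API sequence (run in iapi as a superuser; it assumes the default registration sits on the dm_sysobject type, and [TYPE_ID] is the id returned by retrieve):
retrieve,c,dm_type where name = 'dm_sysobject'
unaudit,c,[TYPE_ID],dm_default_set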

5. Review indexing configuration
If you have fulltext indexing enabled, check which types are configured to be indexed and narrow the list down as much as possible (don't index global types like dm_sysobject, dm_document, etc.); see the post How to register and unregister types for indexing for the scripts.

Monday, June 3, 2013

How to insert a node in browsertree in Webtop

Adding and modifying Webtop (or other WDK application) browsertree nodes is done by customizing the browsertree component. To achieve this, do the following:
In the Webtop custom layer, in the config folder (or a subfolder, according to your structure), create a new file browsertree_component.xml (or find the existing one), which will override the browsertree component configuration. Here's a sample of what the configuration should look like:
<config version="1.0">
  <scope>
    <component modifies="browsertree:webtop/config/browsertree_component.xml">
      <insertafter path="nodes.docbasenodes.node[componentid=inboxclassic]">
        <node componentid="custom_node">
          <icon>customicon_16.gif</icon>
          <label>Custom Node</label>
        </node>
      </insertafter>
      <!-- below optional parts -->
      <replace path="pages.start">
        <start>/custom/webtop/browsertree.jsp</start>
      </replace>
      <replace path="nlsbundle">
        <nlsbundle>com.package.custom.BrowserTreeNlsProp</nlsbundle>
      </replace>
    </component>
  </scope>
</config>


In this example we modify the browsertree component configuration from the webtop layer by inserting a new node after the inboxclassic node. Our node will display the "custom_node" component, which must be a WDK component. We also replaced the standard browsertree jsp layout with our custom jsp. Finally, we've defined a custom NLS resource bundle for localizable strings.
Note that we've used definition modification (introduced in WDK 6.x), but the same result can be achieved by extending the component definition from the parent layer (although that usually requires copying big xml parts from the parent definition).

Friday, May 31, 2013

How to install DAR into Documentum repository

Documentum artifacts (jobs, methods, modules, lifecycles, custom types, etc.) are created in a Composer project and built into a DAR file, which must then be installed into the repository using DARDeployer (called DARInstaller in previous versions) or Composer Headless (using scripts).
Note: the Composer project artifacts can also be installed into the repository directly from Composer, by choosing the Install Documentum Project... item in the context menu (right-click the project): specify the repository name, user name, password, domain (optional), press Login, then Install.

To install a DAR file, open DARDeployer (DARInstaller in older versions) and fill in the fields:
 - DAR: the DAR file to install (required)
 - Input File: a file with custom installation parameters
 - Locale Folder: folder with localization files
 - Log File: log file location (in case you don't want the default location)

Once these fields are filled, select the docbroker host, enter the port number (default: 1489) and press Connect to get the repository list. Then select a repository from the list, enter the user name (a superuser), password and domain (optional). When ready, click the Install button. The installation process starts, and once it completes you will see a message telling you the DAR was installed successfully. In case of errors you will be notified with a proper message (no changes take place, as the DAR installation runs in a transaction). In that case check the logs and try to solve the issue (sometimes it's something as trivial as a lost connection to the CS), then reinstall the DAR file. Sometimes a Data Dictionary re-publish may be necessary before reinstalling the DAR (using the dm_DataDictionaryPublisher job, or simply the API command: publish_dd,c).

If you can't use DARDeployer (for example on a non-Windows OS: Linux, AIX, etc.), you can install the DAR using Composer Headless. For this you must create 3 files:

1. darinstall (script file)
Here's a template of the script file (Linux):
#!/bin/sh

export JAVA_HOME=/opt/documentum/shared/java/1.6.0_17
export ECLIPSE=/opt/documentum/product/6.7/install/composer/ComposerHeadless
export WORKSPACE=/opt/project/darinstall-workspace
$JAVA_HOME/bin/java -Xms512m -Xmx1024m -cp $ECLIPSE/startup.jar org.eclipse.core.launcher.Main -clean -data $WORKSPACE -application org.eclipse.ant.core.antRunner -buildfile darinstall.xml


2. darinstall.xml (Ant buildfile)
<?xml version="1.0"?>
<project name="darinstall" default="install">
  <description>ant script to install a DAR file</description>
    <property file="darinstall.properties"/>
    <property name="dar.folder" value="." />
    <target name="install">
      <antcall target="_install.dar">
        <param name="dar.file" value="${dar.folder}/CustomTypes.dar" />
      </antcall>
      <!-- other dars, in order of dependency -->
    </target>
    <target name="_install.dar">
      <echo message="============================"/>
      <echo message="Installing DAR ${dar.file}"/>
      <echo message="============================"/>
      <emc.install dar="${dar.file}" docbase="${dctm.docbase}" username="${dctm.username}" password="${dctm.password}"/>
    </target>
</project>


3. darinstall.properties (configuration, credentials)
dctm.docbase=project_dev
dctm.username=dmadmin
dctm.password=dmadmin
dar.folder=/opt/project/darinstall


Now just launch the first script (darinstall), wait until it completes, then check the installation logs. If you find no errors/warnings, it's done!

Friday, May 10, 2013

Group memberships updates reflected in delay

In a Documentum environment with multiple Content Servers you can encounter the following issue: after you change a group's membership (add/delete users or groups) using one CS, the changes are not reflected when you connect to the repository through the other Content Servers for a period of time (up to several hours). The Content Server that performed the change reflects it immediately, while the others see it with a delay, usually a big one.
Why is this happening?
That's because of the caching mechanism the Content Servers use. Group memberships are cached from the DB into Content Server memory, so the CS does not query the DB every time it needs this information. When you are connected to a CS and a group is updated, that CS updates its cache accordingly for that group. But what if you have more Content Servers for the same repository? They will not update their caches, because they don't know there was a change. That's why the change is not reflected on the other Content Servers until they refresh their caches.
How to solve that?
There is a configuration key in server.ini which the CS Installation Guide does not pay attention to: upd_last_chg_time_from_db. The CS Administration Guide provides the following description for the key:
Optional key. Specifies that all Content Servers in a clustered environment have timely access to all changes in group membership. Valid values are:
* TRUE: Should only be used for environments with multiple Content Servers. The value should be set to TRUE for all running Content Servers.
* FALSE: The default value.
So the solution is to modify the server.ini files on all Content Servers, setting this key to TRUE:
# A boolean flag specifying whether user entries and groups list are synchronized from db.
upd_last_chg_time_from_db=T

Wednesday, May 8, 2013

How to test BOF (TBO/SBO) code changes without re-deployment

Documentum Business Object Framework (BOF) version 2.0 implies deploying all module implementations into the repository as jar artifacts. These jars are downloaded and cached on demand, when the corresponding module is called.
The need to re-deploy BOF modules (TBOs or SBOs) each time we change the code can cause significant delays in the development process: recompiling the Composer project, installing it and restarting the JMS each time consumes a lot of time.
Is there a way to test BOF code fast, without redeploying it? Yes, there is!
You can cheat the DFC client and replace the cached jars with newer versions. Here's how you can do it.
After calling the BOF module once (to ensure it's cached), go to the BOF cache folder: its location on the client machine can be specified in the dfc.properties file, in the dfc.cache.dir property (the default is the cache subdirectory of dfc.data.dir). Inside you'll find a folder structure like this: [DFC_VERSION]/bof/[DOCBASE_NAME]. In the docbase folder you'll find a number of jars named [DMC_JAR_OBJECT_ID].jar. You have to locate the one containing your BOF code. For this, you can run the DQL command:
select r_object_id from dmc_jar where object_name='[JAR NAME]'
Or you can set the dfc.bof.cache.append_name property to true, so that the cached jar names will also contain their object names (on the next caching).

Once you locate your jar, you have 2 options:
a) replace the cached jar with the new one built by your IDE, keeping the original name
b) open the jar (like a common archive) and replace only the modified class(es)
Remember to stop your DFC client first to unlock the cached jars; otherwise you can't modify them.
Now start your DFC client and test your changes.

Code fast, test faster!

Tuesday, May 7, 2013

Custom job method DFC code sample

Implementing a custom Documentum job method requires writing some non-trivial DFC code. Below you'll find a sample using an abstract class which can be extended and reused for any job method implementation.
Connection/helper methods are encapsulated in the AbstractJobMethod abstract class, so you can write custom classes that extend it. The only method to implement is execute(), in which you have an active session (handled by the abstract class) and can perform the required actions against the repository.
The abstract class uses CustomJobArguments, an extension of the standard DfStandardJobArguments, which provides some additional methods to access the passed job arguments.
Check How to create a custom job in Documentum for details on creation of custom job and method.

public class SomeCustomMethod extends AbstractJobMethod {
  public int execute() throws Exception {
    DfLogger.debug(this, "Started custom method implementation with user: " + session.getLoginUserName(), null, null);
    // custom method logic goes here
    return 0;
  }
}
////
public abstract class AbstractJobMethod implements IDfMethod, IDfModule {
// for BOF 1.0 the class must implement only the IDfMethod interface; the jar is deployed on the JMS
// for BOF 2.0 it also implements IDfModule, because it is deployed into the repository and runs as a BOF module

  protected CustomJobArguments jobArguments;
  protected IDfTime startDate;
  protected IReportWriter reportWriter;
  protected IDfSession session;
  protected IDfSessionManager sessionManager;
  protected IDfClientX clientx;
  protected IDfClient client;

  public int execute(Map args, PrintWriter writer) throws Exception {
    setup(args, writer);
    try {
      int retCode = execute();
      printJobStatusReport(retCode);
      return retCode;
    } catch (Exception e) {
      DfLogger.error(this, "", null, e);
      reportWriter.emitLine("Error encountered: " + e.getMessage());
      reportWriter.closeOut(false);
      throw e;
    } finally {
      if (reportWriter != null)
        reportWriter.close();
      if (session != null)
        sessionManager.release(session);
    }
  }

  // the only method to be implemented in concrete subclasses
  public abstract int execute() throws Exception;

  public void setup(Map args, PrintWriter writer) throws DfMethodArgumentException, DfException, Exception {
    startDate = new DfTime();
    jobArguments = new CustomJobArguments(new DfMethodArgumentManager(args));
    setupMethodFactory(writer);
    String username = jobArguments.getUserName();
    String password = jobArguments.getString("password");
    String domain = jobArguments.getString("domain");
    String docbase = jobArguments.getDocbaseName();
    setupSessionManager(username, password, domain, docbase);
  }

  private void setupSessionManager(String username, String password, String domain, String docbase) throws DfServiceException, DfException {
    DfLogger.debug(this, String.format("setupSessionManager-> username[%s] password[%s] domain[%s] docbase[%s]", username, password, domain, docbase), null, null);
    clientx = new DfClientX();
    client = clientx.getLocalClient();
    sessionManager = client.newSessionManager();
    IDfLoginInfo loginInfoObj = clientx.getLoginInfo();
    loginInfoObj.setUser(username);
    if (password != null && !password.equals(""))
      loginInfoObj.setPassword(password);
    loginInfoObj.setDomain(domain);
    sessionManager.setIdentity(docbase, loginInfoObj);
    session = sessionManager.getSession(docbase);
  }

  private void setupMethodFactory(PrintWriter writer) throws Exception {
    try {
      IDfId jobId = jobArguments.getJobId();
      ReportFactory reportfactory = new ReportFactory();
      if (writer != null) {
        DfLogger.debug(this, "uso reportFactory", null, null);
        reportWriter = reportfactory.getReport(jobArguments.getDocbaseName(), jobArguments.getUserName(), "", jobArguments.getMethodTraceLevel(), jobId, writer);
      } else {
        DfLogger.warn(this, "writer == null, uso SimpleReportWriter", null, null);
        reportWriter = new SimpleReportWriter();
      }
    } catch (Exception e) {
      DfLogger.error(this, "Failed to create report writer. Error: " + e.getMessage(), null, e);
      throw e;
    }
  }

  private void printJobStatusReport(int retCode) throws Exception {
    reportWriter.emitLineToReport("Return Code-> " + retCode);
    String jobStatus = null;
    IDfTime end_date = new DfTime();
    long min_duration = Utility.timeDiff(startDate, end_date) / 60L;
    if (retCode == 0)
      jobStatus = "Custom Job completed at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes.";
    else if (retCode > 0)
      jobStatus = "Custom job completed with Warnings at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes.";
    else
      jobStatus = "Custom job completed with Errors at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes. Check job report for details.";
    updateJobStatus(jobStatus, jobArguments.getJobId());
    reportWriter.closeOut(retCode >= 0);
  }

  public void updateJobStatus(String sJobStatus, IDfId jobId) throws Exception {
    if (session == null) {
      DfLogger.error(this, "setJobStatus: (session==null)", null, null);
      throw new NullPointerException("setJobStatus: (session==null)");
    }
    IDfPersistentObject job = session.getObject(jobId);
    if (job == null)
      throw new DfException("Failed to retrieve dm_job object from id '" + jobId.getId() + "'.");
    job.setString("a_current_status", sJobStatus);
    job.save();
  }
}
////
public class CustomJobArguments extends DfStandardJobArguments {

  protected IDfMethodArgumentManager methodArgumentManager;
 
  public CustomJobArguments(IDfMethodArgumentManager manager) throws DfMethodArgumentException {
    super(manager);
    methodArgumentManager=manager;
  }

  public String getString(String paramName) throws DfMethodArgumentException {
    return methodArgumentManager.getString(paramName);
  }

  public int getInt(String paramName) throws DfMethodArgumentException {
    return methodArgumentManager.getInt(paramName).intValue();
  }
}

How to create a custom job in Documentum

Documentum jobs allow you to process objects automatically, on a schedule, according to your business logic. They are quite similar to operating system scheduled tasks. When started, a job calls a Documentum method which executes the desired logic. Java methods are executed on the Java Method Server (JMS), which is part of the Content Server installation.
A job consists of several items:
 1) Job Method implementation code
 1.1) for BOF 2.0: corresponding implementation jar(s) and module
 2) dm_method object - which holds information about the method, implementation class, etc.
 3) dm_job object - which holds information about job, its schedule, last and next invocation, current status, the method it runs, etc.

Here's how to create all of the required items for the job.

1. Job method implementation code
First of all you have to write the code that will implement the logic the job method will perform. There are 2 approaches to deploying this code: BOF 1.0 or BOF 2.0.
With BOF 1.0 the method classes are deployed manually on the JMS (path: [JBOSS]\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF\lib), while BOF 2.0 requires you to deploy the implementation jars into the repository and associate them with a BOF module.
For both approaches you can find job method implementation classes samples here: Custom job method DFC code sample
When you're done with coding, compile the implementation classes and build the jar.

1.1 Implementation jar(s) and module (for BOF 2.0)
Open Composer and create the following artifacts:
a) Jar: the artifact name should equal the jar name (to avoid confusion); the jar type is Implementation.
b) Module: the module name must equal the fully qualified class name (with packages); module type: Standard Module. Choose the implementation jar created at the previous step (a) and select the class name (equal to the module name). In addition, specify all the required modules and Java libraries used by your implementation class(es).

2. dm_method object
a) Using Composer
In Composer create a Method artifact, enter the method name, select type: java, Command: the module name (equal to the implementation's full class name), Run Controls: Run Synchronously, and check the options Run as the server, Trace launch, Use method server, Launch directly. If your code takes considerable time to run, increase the timeout values.

b) Using Documentum Administrator
Open DA and go to the Job Management->Methods node. From the menu bar select File->New->Method, enter the required values and save the method.

c) Using DQL:
create dm_method object
set object_name = 'MyCustomMethod',
set launch_async = false, set launch_direct = true, set method_type = 'java',
set method_verb = 'com.documentum.project.method.SomeCustomMethod',
set run_as_server = true, set timeout_default = 600,
set timeout_min = 300, set timeout_max = 3000, set trace_launch = true,
set use_method_content = false, set use_method_server = true;


3. dm_job object
a) Using Composer
In Composer create a Job artifact, enter the job name, Subject, select the Method (created at step 2) and set the schedule options. In the Arguments section you can add custom arguments passed to your method implementation class. Keep in mind that even if you use custom arguments, you should also pass the standard arguments (docbase name, installation owner, job id, etc.), which are used to manage the job. Thus, in order to add custom arguments, select Custom Arguments, add the desired arguments with values, then switch back to Standard Arguments. Save the job artifact.

b) Using Documentum Administrator
Open DA and go to the Job Management->Jobs node. From the menu bar select File->New->Job, enter the required values and save the job.

c) Using DQL:
create dm_job object
set object_name = 'MyCustomJob',
set title = 'Title', set subject = 'Job description',
set pass_standard_arguments = true, set is_inactive = false,
set start_date = DATE('01/01/2013'),
set expiration_date = DATE('01/01/2020'),
set a_next_invocation = DATE('02/01/2013'), set max_iterations = 0,
set method_name = 'MyCustomMethod', set run_mode = 3,
set run_interval = 1, set inactivate_after_failure = true;


Now you have all the required items for your job. Build the Composer project and install it into the repository (from Composer: right-click the project and choose Install Documentum Project..., or take the built DAR from the bin-dar folder and install it using DarDeployer/DarInstaller). If you're using BOF 1.0, deploy the implementation jar manually on the JMS (usually in [JBOSS]\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF\lib).
If it's not the first deployment of the DAR, it's better to restart the JMS and clear the BOF cache (mandatory for BOF 1.0).

It's done! Go on with testing: run the new job and check the job report and JMS logs.

Tuesday, April 9, 2013

How to stop running job

A Documentum job consists of a dm_job object containing the job configuration, the name of the method to run, and process execution information. The job method is executed on the Content Server (for a java method, on the JMS). Once the job is started, there's no 'soft' way to stop it; you can only force it.
Below are the steps to properly stop a running job, also resetting the dm_job object:
1. Get the process id, by running the DQL:
select a_last_process_id from dm_job where object_name='[JOB_NAME]'
2. Using Process Explorer (or a similar tool) locate the process by id and kill it (or use a command like: taskkill /pid 1234).
3. Run the API command to unlock the job object:
unlock,c,[job_id]
4. Execute the following DQL queries to clear job execution-related attributes:
a) Reset the application that uses/locked the job
update dm_job objects set a_special_app='' where object_name='[JOB_NAME]';
b) Reset the current status
update dm_job objects set a_current_status='FAILED' where object_name='[JOB_NAME]';
c) Prevent the job from being run immediately
update dm_job objects set run_now=FALSE where object_name='[JOB_NAME]';

Remember to change the schedule or inactivate the job if you don't want it to run automatically after the scheduled period.
If your job is locked in running state, check here how to unlock it: How to unlock a job in running state.

Tuesday, April 2, 2013

Renaming users - how dm_UserRename job works

In a previous article, How to change the user_name of a dm_user, we saw how to rename a Documentum user manually (using Documentum Administrator) and via DFC code.
Let's assume we want to update a set of users, but can't or don't want to write DFC code (it also takes time to write and run the code). We have an additional option: have the dm_UserRename job rename the users automatically. But how does the dm_UserRename job work?
This job polls a queue of dm_job_request objects. Such an object has the following attributes relevant for the job:
object_name         : UserRename
job_name            : dm_UserRename
method_name         : dm_UserRename
request_completed   : F
arguments_keys   [0]: OldUserName
                 [1]: NewUserName
                 [2]: report_only
                 [3]: unlock_locked_obj
arguments_values [0]: userA
                 [1]: userB
                 [2]: F
                 [3]: F


The job reads the arguments_keys and arguments_values values and calls the dm_UserRename method with these arguments, so the specified user is renamed with the selected options (report_only mode, unlock objects or leave them locked). When the operation completes, the request_completed attribute is set to T.
Thus, in order to rename a set of users, you need to create a dm_job_request object for each user and run the dm_UserRename job. Note that by default the dm_UserRename job is inactive, so you can either run it manually, or activate and schedule it if you rename users often.

Here's sample DQL to create a dm_job_request object (to rename 1 user):
create dm_job_request object
set object_name='UserRename', set job_name='dm_UserRename',
set method_name='dm_UserRename', set request_completed=FALSE,
append arguments_keys='OldUserName', append arguments_keys='NewUserName',
append arguments_keys='report_only', append arguments_keys='unlock_locked_obj',
append arguments_values='userA', append arguments_values='userB',
append arguments_values=FALSE, append arguments_values=FALSE

How to change the user_name of a dm_user

In a Documentum repository, users are uniquely identified by the user_name attribute. Being a key, the user_name value is used in related objects (for example in owner_name, r_modifier, r_creator, r_lock_owner, etc.) to state that this user performed an action on the object.
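
For example, a quick DQL sketch showing how widely a user_name can be referenced:
select count(*) from dm_sysobject where owner_name = 'userA' or r_lock_owner = 'userA'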
In certain cases we need to change the user_name value: for example, when we want to create a new user but an old one (perhaps deactivated) already exists with the same user_name.
Being a key, the value can't simply be updated; you won't even be able to modify it on the user properties screen. Here are the ways to accomplish this:

1. Manual renaming
There's a simple tool called Reassign User which invokes the dm_UserRename job (Renaming users - how dm_UserRename job works explains how this job works):
a. (Optional) Create a new user with the user_name you want to rename the old user to.
b. Open DA, go to the Users node in the browsertree, find the desired user, right-click it and choose Reassign User. On the opened page enter/select the new user and select the option to run the reassign job now (changing other settings if required).
c. (Optional) As soon as the dm_UserRename job has completed, you can remove the original (old) user if it's no longer needed.
Note that you can skip creating a new user and just enter the new user_name on the Reassign User page; it will handle the user_name modification itself.

2. Automatic renaming
If we want to rename many users, the manual approach would be really annoying, so as usual we turn to some DFC coding.
Renaming users in DFC is quite straightforward; it's enough to call the renameUser method:
 IDfUser user = session.getUser("userA");
 user.renameUser("userB", true, false, true);
 user.save();


Here's the method description from the API:
void renameUser(String userName,
                boolean isImmediate,
                boolean unlockObjects,
                boolean reportOnly)
                throws DfException

    Renames this user. If this is a new user, the attribute value is set. If this is an existing user, this method invokes a job to rename user.
    Parameters:
        userName - user name to be set or renamed to (existing user)
        isImmediate - Determines if the job should be run immediately or run on schedule.
        unlockObjects - control if the objects should be unlocked when renaming user. If this is a new user, this parameter does not take any effect.
        reportOnly - control if the job created is to report or actual rename. If this is a new user, this parameter does not take any effect.

How to register and unregister types for indexing

Documentum uses fulltext indexing to enable search on all document properties as well as document content. How does fulltext indexing work?
Indexing is performed by a dedicated server (FAST, xPlore, etc.) that polls the queue of the fulltext user. In order to index certain (sub)types of documents, you must register certain events of these types for the index user. When a registered event occurs on a document of a registered type, a queue item is created in the fulltext user's queue, which is then processed by the fulltext indexing server by indexing the document. The events to register are those that usually change the object metadata and/or content: dm_save, dm_readonlysave, dm_checkin, dm_move_content, dm_destroy
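
You can watch this queue fill and drain with a simple DQL (a sketch; dequeued items are marked with delete_flag = TRUE until purged):
select count(*) from dmi_queue_item
where name = 'dm_fulltext_index_user' and delete_flag = FALSE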

By default (in previous versions of Documentum) the dm_sysobject type and all its subtypes are registered for indexing. Usually you don't need every dm_sysobject to be indexed, so before registering your custom types you should unregister dm_sysobject first. You can do that by running the following API script while connected as the fulltext user (dm_fulltext_index_user):
(Note: if you don't know the password for dm_fulltext_index_user, you can generate a login ticket by running the following API as a superuser: getlogin,c,dm_fulltext_index_user. Read more details about this trick in: How to login to the repository with a ticket.)

API for unregistering indexing:
unregister,c,[TYPE_ID],dm_save,,F
unregister,c,[TYPE_ID],dm_readonlysave,,F
unregister,c,[TYPE_ID],dm_checkin,,F
unregister,c,[TYPE_ID],dm_move_content,,F
unregister,c,[TYPE_ID],dm_destroy,,F

[TYPE_ID] is the r_object_id of the dm_type object.
Note: if you have more than one index server, you should have an index user for each, so run the unregister script for each user.

You can also do this by unchecking the 'Enable Indexing' checkbox on the dm_sysobject type properties in DA.
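
Wherever the scripts need [TYPE_ID], you can look it up with a DQL like this (the type name is illustrative):
select r_object_id from dm_type where name = 'my_custom_type'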

You can register a custom type for indexing in 2 ways:
1. By executing the following API script, with the fulltext user (dm_fulltext_index_user):
register,c,[TYPE_ID],dm_save,,F
register,c,[TYPE_ID],dm_readonlysave,,F
register,c,[TYPE_ID],dm_checkin,,F
register,c,[TYPE_ID],dm_move_content,,F
register,c,[TYPE_ID],dm_destroy,,F


[TYPE_ID] is the r_object_id of the dm_type object.
Note: if you have more than one index server, you should have an index user for each, so run the register script for each user.

2. Manually, by using DA: Open Types node in browsertree, find the desired type, open its properties and check the 'Enable Indexing' checkbox.

To check which types are currently registered for indexing, use the DQL:
select name from dm_type where r_object_id in (select distinct registered_id from dmi_registry where user_name='dm_fulltext_index_user')

Friday, March 22, 2013

D2 - Configuring Lifecycles

D2 does not use standard Documentum lifecycles (dm_policy objects); it comes with a new paradigm, so we should pay attention to the details, because it doesn't work the way we are used to.
As the D2 interface and functionality is not very customizable, much of our custom logic (actions) can be attached to lifecycle states. That's why it's important to know and exploit the potential of D2 lifecycles.

A D2 lifecycle is defined like any other configuration module, in the D2-Config application, the configuration being saved in a d2_lifecycle_config object.
After creating a new LC configuration, create the states, specify the first (start) state, select whether actions should be executed on start, and optionally an alternate attribute holding the current state (by default it's a_status). For each state you have the following configuration options:
1. Entry condition: a condition that can be applied on a list of items:
 a) VD Children: checks target state of Virtual Document children (if current item is a VD)
 b) Checked by Method: you can specify a (custom) method that (can get some arguments) will perform required checks
 c) On group: checks if the current user is in the specified group, given either as free text (a constant), in a property, or as part of a Dictionary.
 d) On linked document: checks if the linked document is in the specified state.
 e) On property: checks performed on the value of an attribute
 f) On rendition: checks the existence of a rendition
 g) On uniqueness: runs an existing uniqueness check
 h) Permission: checks if current user has the specified minimal permission on the object.
For each option you can specify a message that will be displayed as a warning if the condition is not met (and the object will not enter the state).

2. Actions: are executed when the object enters the current state. Possible action types:
 a) on VD children: applies the selected state on VD children
 b) Apply autolink, autonaming, parameters, security: applies the corresponding configuration module(s) enabled for the active context.
 c) Apply method: runs the specified method (standard or custom) with optional arguments
 d) Change linked document state: changes the state of the linked documents (you must specify relation type and direction)
 e) Change other versions state: changes the state of the other versions of the current object
 f) Copy repeating property: copies the values from one repeating property to another (modes: remove all existing, append, insert before)
 g) Make version: creates a new version (version type is selected) of the object
 h) Manage distribution: launches or stops the specified distribution configuration
 i) Send in Workflow: starts a Workflow with the current object as attachment (select workflow configuration, name, notes)
 j) Send email: sends emails according to the selected mailing list
 k) Other types: Mark, Unmark Set property, Set repeating property, Snapshot, Work offline, Remove other versions, Rendition request

NOTE: It's important to add a 'Set property' action for the a_status attribute with the value of the current LC state, because D2 does not update it automatically when an object enters a new state, only on the first state (don't ask me why :\)
As you see, the range of pre-configured actions is pretty big, but if none of them fits your requirements you can always create a custom method and call it with the 'Apply method' action type.

3. Next state: specify the next (or previous) state and the following settings:
 a) Type of transition: promote, demote, suspend, resume, restart
 b) Dialog box: a property sheet configuration which will appear after you start the transition, allowing you to view/update required attributes
 c) Action to perform (Checkin, Export file, Insert PDF): action that will be performed when transition occurs
 d) Trigger event for transition: the event that will automatically trigger transition to this state
 e) Menu label: Label that will be displayed on the transition menu item - this is mandatory, otherwise the menu item will not be displayed
 f) Confirmation message: A confirmation message that will appear in a popup forcing the user to confirm the transition
 g) Other settings: Electronic signature/Intention required, Intentions dictionary

NOTE: If you don't specify a menu label for a state transition, the corresponding (context) menu item will not be displayed and you won't be able to transition to the next state.

4. Transition condition: conditions evaluated on different items (like the entry conditions). If a condition is not met, the specified message is displayed and the transition does not occur.

Things to remember:
1. The extended permission 'Change State' on an object does not affect D2 lifecycle transitions (unlike standard LCs, where this permission is required), so you must add this constraint manually in the LC configuration if needed.
2. For each LC state add a 'Set property' action updating the a_status attribute with the new status (it's not done automatically).
3. For each next state specify the Menu label (otherwise the menu item for the transition will not appear in the LC menu).

D2 - Using dependent fields on property sheets

D2 offers 2 ways of defining dependent fields on property sheets: taxonomies and DQL queries.

1. Taxonomy

[Screenshot: D2 taxonomy configuration]
To use a taxonomy, you must first configure (in D2-Config) N dictionaries, then a taxonomy with N levels corresponding to N fields/attributes. For each level of the taxonomy, specify the corresponding dictionary and the desired values.


On the property sheet add the dependent fields (dropdown lists), then for each specify the type (Taxonomy), choose the taxonomy name, the corresponding level, and the previous/next properties that must be updated when the current field changes. If no values are selected, the dependent fields will contain all the values from the corresponding dictionary.
Here's a simple Country-City dependency:
[Screenshots: D2 property attribute taxonomy type; D2 property sheet dependent attributes]

2. DQL

[Screenshot: D2 property sheet refresh of dependent attributes]
On the first (master) attribute's configuration, add the dependent attributes in the field 'In case of modification, reinitialize the following controls'.


On the dependent attributes select type: DQL, then enter a DQL like:
select dependent_attr from some_type where '$value(master_attr)'='' or master_attr='$value(master_attr)'
[Screenshot: D2 property sheet attribute DQL]
If you don't want the dependent attribute populated until the master attribute is selected/entered, remove the clause '$value(master_attr)'='' from the DQL.
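
That is, the trimmed-down DQL becomes:
select dependent_attr from some_type where master_attr='$value(master_attr)'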

D2 Query Forms - add dynamic filters to the DQL

D2 client version 4.0 does not include an advanced search; only quicksearch and query forms are available (advanced search was added in version 4.1).
Query forms, however, are not very powerful, because you must set a static DQL query based on the values entered or selected on the associated property page.
If we define a property sheet with 1-2 attributes which are always populated, the DQL is quite trivial.
But what if we want a more generic query form, with multiple optional fields (filters)? How do we specify in a static DQL which filters should be used (and skip the empty ones)?
You can do that by using a query like this:
select * from custom_type where ('$value(attributeA)'='' or attributeA='$value(attributeA)') and ('$value(attributeB)'='' or attributeB='$value(attributeB)')...and ('$value(attributeN)'='' or attributeN='$value(attributeN)')

So if the attributeA field on the property sheet is empty, the clause '$value(attributeA)'='' evaluates to true and the second part is irrelevant; if it's not empty, the second part is evaluated, filtering the results by the value entered for the attribute.
In a similar way you can use other operators: >, <, like, etc. (keeping the first part, the empty-string check, unchanged).

Now let's see what happens with Date fields. We must convert the string value into a Date:
('$value(attrDate)'='' or attrDate>date('$value(attrDate)','mm/dd/yyyy'))

But if the string is empty, the resulting DQL clause will be:
(''='' or attrDate>date('','mm/dd/yyyy'))
and this will fail, because an empty string can't be converted into a Date.
There's a workaround for that: define and use default values (for example: $TODAY, $NOW) for Date attributes.

Thursday, March 21, 2013

D2 Installation Troubleshooting

During D2 installation there are some steps you should pay attention to, while others are not described in the installation guide at all.
Below you will find some notes and issues faced during installation of the D2 product.
I Back-end installation on the Content Server
1. Stop all Documentum services before the installation.
2. If you're upgrading D2, delete the preferences objects (DQL: delete d2c_preferences objects) and remove the poi*.jar files from the ServerApps.ear lib folder.
3. While installing the D2 libraries for the Content Server and JMS, enter the correct path to the ServerApps.ear lib folder on the JMS (for Dctm 6.7 SP1 it does not contain APP-INF), as well as the path to a DFS SDK of the same version as your CS.
4. Remove the following jars from the ServerApps.ear lib folder: itext.2.0.2.jar, xercesImpl-2.7.1.jar
(optional: if a recent patch has been installed on the CS, replace the dfc.jar in the ServerApps.ear lib folder with the newer one)
5. Check that the D2.jar file location is in the CLASSPATH environment variable of the server hosting the CS, and add it if the installer did not.
6. Configure the logback.xml file (which contains the logging settings) in the ServerApps.ear/[APP-INF/]classes folder. Note: in Documentum 6.7 SP1 the classes folder is created by the installer directly in the ServerApps.ear folder, though it is normally located in ServerApps.ear/APP-INF. Most probably it's a bug, so I suggest copying the newly created xml files (after configuring logging) from ServerApps.ear/classes to ServerApps.ear/APP-INF/classes.
7. For DAR installation: if using Dctm 6.6 or later, select 'Do not install DocApp/DAR' in the installation wizard, just select a path for extraction, and then install the DARs (D2.dar, D2Widget-DAR.dar) manually with DarDeployer. For version 4.1, also install CollaborationServices.dar (on the global repository too, if it's not the same one).
If your installation owner is not 'dmadmin', create an installparam file with following content:
<?xml version="1.0" encoding="UTF-8"?>
<installparam:InputFile xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:installparam="installparam">
    <parameter key="dmadmin" value="my_install_owner"/>
</installparam:InputFile>

Use this installparam file during the DAR installation.

8. Create registered table for auditing:
register table dm_audittrail_s (event_name string(64), user_name string(32), time_stamp time, object_name string(255), string_1 string(200), string_2 string(200), string_3 string(200), string_4 string(200), string_5 string(200))
update dm_registered object set object_name = 'D2 Audits', set owner_table_permit = 1, set group_table_permit = 1, set world_table_permit = 1 where object_name = 'dm_audittrail_s';


II Front-end installation on Application Server
1. After extracting the webapps with the installer, configure the following files for each application:
 a) D2-Config:
    - Path: D2-Config.war\WEB-INF\classes
    - Files: D2-Config.properties, dfc.properties, log4j.properties (add if missing), logback.xml
 b) D2-Client:
    - Path: D2-Client.war\WEB-INF\classes
    - Files: D2-Client.properties, dfc.properties, logback.xml
 c) D2FS (not present in 4.1):
    - Path: D2FS.war\WEB-INF\classes
    - Files: dfc.properties, logback.xml
 d) D2:
    - Path: D2.war\WEB-INF\classes
    - Files: applicationContext.xml, logback.xml (+ 4.1: logback-base.xml, D2FS.properties, dfc.properties)
 * Note: the URL of D2FS in applicationContext.xml (not required for 4.1) must contain the host name, not localhost (because this URL also serves as the download URL for files)
2. Widget View plugin configuration:
a) If you don't find the Widget plugin jars in the package, launch the D2-Widget-Install.jar installer to extract the jars to some location
b) Copy the libraries D2-Constants.jar and D2-Widget-API.jar into \D2-Config\WEB-INF\lib (plus D2-Widget.jar in case of patch 02)
c) For versions prior to 4.1: copy D2-Widget-Plugin.jar to D2-Config\WEB-INF\classes\plugins
(Note: Weblogic does not correctly locate the plugin jar when a relative path is specified, so you have to deploy the jar to an external location and specify the full path in D2-Config.properties, for example: D:\EMC\D2\plugins\D2-Widget-Plugin.jar)
d) For versions prior to 4.1: add the path to the plugin in \D2-Config\WEB-INF\classes\D2-Config.properties: plugin_1=plugins/D2-Widget-Plugin.jar (or the full path to the jar location for Weblogic)
Note: in version 4.1 the plugins are included and configured by default in D2-Config (however, if Weblogic does not find them, use an external path as shown above).

III Common issues

Issue 1 - Error while installing the DAR file: Owner object (user OR group) 'dmadmin' not found in the target repository
Solution:
As described above, create a file nodmadmin.installparam with the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<installparam:InputFile xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:installparam="installparam">
    <parameter key="dmadmin" value="my_install_owner"/>
</installparam:InputFile>

On the DarDeployer interface, under DAR Details, click Browse next to Input File, and select the nodmadmin.installparam you've created.

Issue 2 - File transfer fails in D2
Solution: Check the D2FS URL in the \D2\WEB-INF\classes\applicationContext.xml - it should use the server name, not 'localhost'.

Issue 3 - Tomcat PermGen Space error
Solution: Add/modify the following Java options in your Tomcat environment: -XX:PermSize=256m -XX:MaxPermSize=512m

Issue 4 - D2-Config menus do not show all items (not sized correctly)
Solution: Configure Internet Explorer as follows:
  - Allow popup windows
  - Allow windows to resize by script without size or position constraints
  - Allow the browser to use tabbed browser settings when encountering a popup window


Wednesday, February 20, 2013

How to create an id generator (synchronized incrementer)

Pretty often we need a module that generates an id (an incremented number) which must be unique: for example auto-numbering modules, etc.
To ensure the uniqueness of these ids, we must synchronize the calls, to exclude duplicate values for concurrent calls. But if we have a distributed environment, the module is executed in parallel, unsynchronized environments. How do we solve this problem?
We won't have synchronization issues if we implement the incrementer at the database level (there is a single database even in a multi-Content Server environment).
The most convenient way is to use sequences (Oracle, SQL Server, etc.; other DBMSs should have something similar).
Below you'll find a smart way to implement the incremented number generator. Only DQL queries are used (some of them executing SQL commands via exec_sql):
1. Create the DB sequence
execute exec_sql with query = 'create sequence my_sequence start with 1';

2. Create a function get_next_code that uses the sequence and returns the incremented value as a string (Oracle PL/SQL syntax):
execute exec_sql with query = 'create or replace function get_next_code return varchar2 as code number; begin select my_sequence.nextval into code from dual; return to_char(code); end;';

3. Create a view that exposes the incremented value
execute exec_sql with query = 'create or replace view my_counter (my_code) as (select get_next_code() from dual)';

4. Register table (view)
register table dm_dbo.my_counter (
  my_code   char(32)
);

update dm_registered object
set owner_table_permit = 1,
set group_table_permit = 1,
set world_table_permit = 1
where object_name = 'my_counter';

The id generator is ready. Now in order to get the next id, you just need to run the following DQL:
select my_code from dm_dbo.my_counter

Couldn't be easier, could it?

Monday, February 11, 2013

How to display a component without authentication in WDK

Sometimes you need to display a component without opening a session on the target repository: for example, a page telling the user they do not meet the requirements to access the application, etc.
If you simply jump to your component, the internal filters will detect that no session is opened and will automatically forward the request to the login component (according to the authentication schemes defined). To avoid this, you must perform some configuration:
In the file ..\webtop\WEB-INF\classes\com\documentum\web\formext\Environment.properties
you will find a number of lines like this:
non_docbase_component.32=showtestresult

All you have to do is find the last line of this kind (the one with the biggest index, in this case 32) and add a similar line for your component after it, incrementing the index:
non_docbase_component.33=mycomponent

Now your component will be displayed even without logging into any repository.