Sunday, June 16, 2013

D2 4.1 custom logo

D2 has a standard logo which, at the moment, cannot be changed through configuration (though this is planned for version 4.2).
You can change the D2 logo in version 4.1, but it requires customizing the D2 webapp by modifying a D2 CSS file. You can change the logo both on the login frame and on the menu bar. Here is the procedure to replace the D2 logo:

1. Prepare your custom D2 logo: make it in PNG format, 70x50 pixels, with a transparent background
2. Copy your custom logo (in our case d2_custom.png) into the location resources/com/emc/x3/client/resources/images/logo
3. To replace the D2 logo in the menu bar, open the file resources/themes/slate/css/xtheme-x3.css and search for "img.x3-portal-logo" (usually line 61). In this CSS class, add the part between the customization comments:

img.x3-portal-logo {
background-color: transparent !important;
z-index: 2;
width: 80px;
/* CSS customization start */
background-image: url("/D2/resources/com/emc/x3/client/resources/images/logo/d2_custom.png");
background-repeat: no-repeat;
height: 0 !important;
overflow: hidden;
padding: 60px 0 0;
/* CSS customization end */
}


4. To replace the D2 logo on the login window, in the same CSS file search for ".x3-login-window-banner" (usually line 473) and add a new rule, the one between the customization comments:

.x3-login-window-banner {
/* height: 50px; */
background: -moz-linear-gradient(center top , #707070, #000000) repeat scroll 0 0 transparent;
background: -webkit-gradient(linear, left top, left bottom, from(#707070), to(#000000));
filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr=#707070,EndColorStr=#000000,GradientType=0);
border-bottom: 2px solid #CCCCCC;
/* the following props are temporary until we get logo img */
padding: 0 0 0 10px;
}

/* CSS customization start */
.x3-login-window-banner img {
background-image: url("/D2/resources/com/emc/x3/client/resources/images/logo/d2_custom.png");
background-repeat: no-repeat;
height: 0 !important;
overflow: hidden;
padding: 60px 0 0;
width: 80px;
}
/* CSS customization end */


After you make the changes, restart the application server (cleaning its cache), clear your browser cache, then check that your custom D2 logo is displayed correctly.

Wednesday, June 12, 2013

D2 back-end tweaking

D2 is a new EMC Documentum application that comes with a brand new concept and technology. It's really dynamic, user-friendly and fast. To avoid performance degradation as time passes, we should tweak and fine-tune its environment. In this article we'll focus on the back-end, the Documentum side. There are several easy tunings that can optimize your Documentum D2 system:

1. Disable unused jobs
The following jobs can be inactivated when you don't use certain features (a DFC sketch for inactivating a job follows the lists):

a) If you don't use external tasks (sent/received via email) in your Workflows:
- D2JobWFReceiveTaskMail
- D2JobWFSendTaskMail

b) If you don't use Workflows at all:
- D2JobWFReceiveTaskMail
- D2JobWFSendTaskMail
- D2JobWFCleanerWorkflows
- D2JobWFLaunchScheduledWorkflows
- D2JobWFCleanPopHistory
- D2JobWFFollowUpTaskNotif
- D2JobWFWorkflowsNotifications
- D2JobDelegation
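
Jobs can be inactivated from Documentum Administrator, but you can also script it. Here's a minimal DFC sketch, assuming an open superuser IDfSession named session and one of the job names above:

// uses com.documentum.fc.client.IDfSession / IDfPersistentObject
public void inactivateJob(IDfSession session, String jobName) throws DfException {
  // fetch the dm_job object by name and set its is_inactive flag
  IDfPersistentObject job = session.getObjectByQualification(
      "dm_job where object_name = '" + jobName + "'");
  if (job == null)
    throw new DfException("Job not found: " + jobName);
  job.setBoolean("is_inactive", true);
  job.save();
}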

2. Review job schedules
Many D2 jobs are scheduled to run frequently (some even every 5 minutes), so if certain features are not used very often, or the refresh rate is not critical, consider increasing the interval between runs.
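
For illustration, the schedule can also be changed with a DQL update; a minimal sketch, assuming an open superuser session named session (note that the unit of run_interval depends on the job's run_mode: minutes, hours, days, etc.):

// uses com.documentum.fc.client.DfQuery / IDfQuery / IDfCollection
// hypothetical example: run this D2 job every 30 units instead of the current interval
IDfQuery query = new DfQuery();
query.setDQL("update dm_job objects set run_interval = 30"
    + " where object_name = 'D2JobWFWorkflowsNotifications'");
IDfCollection result = query.execute(session, IDfQuery.DF_EXEC_QUERY);
result.close();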

3. Activate and schedule Administration Jobs
Ensure the following Documentum Administration jobs are active and run on a regular basis (a quick status check follows the list):
- dm_DMClean (removes deleted and orphaned objects from the docbase)
- dm_DMFilescan (removes deleted or orphaned content files from the file system)
- dm_LogPurge (removes server and session logs from the docbase and file system)
- dm_ConsistencyChecker (runs lots of integrity checks on the docbase)
- dm_UpdateStats (updates database table statistics and repairs fragmented tables)
- dm_QueueMgt (deletes dequeued Inbox (dmi_queue_item) items from the docbase)
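
Here's a minimal sketch of a status check, assuming an open session named session, that lists the administration jobs which are still inactive:

// uses com.documentum.fc.client.DfQuery / IDfQuery / IDfCollection
IDfQuery query = new DfQuery();
query.setDQL("select object_name from dm_job where object_name like 'dm_%' and is_inactive = 1");
IDfCollection jobs = query.execute(session, IDfQuery.DF_READ_QUERY);
while (jobs.next()) {
  System.out.println("Inactive job: " + jobs.getString("object_name"));
}
jobs.close();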

4. Disable/modify auditing when possible
D2 is a pretty dynamic environment: objects are saved/modified many times, and many jobs run quite often. This leads to a considerable amount of audit entries being created, which impacts repository performance.
If you don't need auditing of the default set of events for all objects in the repository, remove the dm_default_set event from audit management. You can then add this set and other events for the custom types used by your applications, or even a specific set of events per custom type.
You can remove default auditing from Documentum Administrator, or by using the unregister API command.
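
Before removing anything, it's worth listing what is currently audited. A minimal sketch, assuming an open session named session and that audit registrations are the dmi_registry rows flagged with is_audittrail:

// uses com.documentum.fc.client.DfQuery / IDfQuery / IDfCollection
IDfQuery query = new DfQuery();
query.setDQL("select registered_id, event from dmi_registry where is_audittrail = 1");
IDfCollection regs = query.execute(session, IDfQuery.DF_READ_QUERY);
while (regs.next()) {
  System.out.println(regs.getString("event") + " registered on " + regs.getId("registered_id"));
}
regs.close();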

5. Review indexing configuration
If you have full-text indexing enabled, check which types are configured to be indexed and try to narrow the list as much as possible (don't index global types like dm_sysobject, dm_document, etc.).
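
A hedged helper for this review: counting objects per type shows where excluding a type from indexing pays off most (plain DQL, no index agent specifics assumed; session is an open IDfSession):

// uses com.documentum.fc.client.DfQuery / IDfQuery / IDfCollection
IDfQuery query = new DfQuery();
query.setDQL("select r_object_type, count(*) as cnt from dm_sysobject group by r_object_type");
IDfCollection counts = query.execute(session, IDfQuery.DF_READ_QUERY);
while (counts.next()) {
  System.out.println(counts.getString("r_object_type") + ": " + counts.getInt("cnt"));
}
counts.close();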

Monday, June 3, 2013

How to insert a node in browsertree in Webtop

Adding and modifying Webtop (or other WDK application) browsertree nodes is done by customizing the browsertree component. To achieve this, do the following:
In the Webtop custom layer, in the config folder (or a subfolder, according to your structure), create a new file browsertree_component.xml (or find the existing one), which will override the browsertree component configuration. Here's a sample of how the configuration should look:
<config version="1.0">
  <scope>
    <component modifies="browsertree:webtop/config/browsertree_component.xml">
      <insertafter path="nodes.docbasenodes.node[componentid=inboxclassic]">
        <node componentid="custom_node">
          <icon>customicon_16.gif</icon>
          <label>Custom Node</label>
        </node>
      </insertafter>
      <!-- below optional parts -->
      <replace path="pages.start">
        <start>/custom/webtop/browsertree.jsp</start>
      </replace>
      <replace path="nlsbundle">
        <nlsbundle>com.package.custom.BrowserTreeNlsProp</nlsbundle>
      </replace>
    </component>
  </scope>
</config>


In this example we modify the browsertree component configuration from the webtop layer by inserting a new node after the inboxclassic node. Our node will display the "custom_node" component, which must be a WDK component. We also replaced the browsertree standard JSP layout with our custom JSP. Finally, we've defined a custom NLS resource bundle for localizable strings.
Note that we've used definition modification (introduced in WDK 6.x), but the same result can be achieved by extending the component definition from the parent layer (although that usually requires copying big XML parts from the parent definition).

Friday, May 31, 2013

How to install DAR into Documentum repository

Documentum artifacts (jobs, methods, modules, lifecycles, custom types, etc.) are created in a Composer project and built into a DAR file, which then must be installed into the repository using DARDeployer (called DARInstaller in previous versions) or Composer Headless (using scripts).
Note: the Composer project artifacts can also be installed into the repository directly from Composer, by choosing the Install Documentum Project... item in the context menu (right-click on the project): specify repository name, user name, password, domain (optional), press Login, then Install.

To install a DAR file, open DARDeployer (or DARInstaller in old versions) and fill in the fields:
 - DAR: the DAR file to install (required)
 - Input File: a file with custom installation parameters
 - Locale Folder: the folder with localization resources
 - Log File: the log file location (in case you don't want the default location)

Once these fields are filled, select the docbroker host, enter the port number (default: 1489) and press Connect to get the repository list. Then select a repository from the list, enter the user name (a superuser), password and domain (optional). When ready, click the Install button. The installation process starts, and once it completes you will see a message stating the DAR was installed successfully. In case of errors you will be notified with a proper message (no changes take place, as the DAR installation is done in a transaction). In this case check the logs and try to solve the issue (sometimes it might be as trivial as a lost connection to the Content Server), then reinstall the DAR file. Sometimes, before reinstalling the DAR, a Data Dictionary re-publish may be necessary (using the dm_DataDictionaryPublisher job or simply the API command: publish_dd,c).
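
If you prefer to trigger the re-publish from code, here's a minimal sketch using the DFC API passthrough, assuming an open superuser IDfSession named session (an empty argument republishes the whole Data Dictionary, like publish_dd,c in IAPI):

// apiExec sends a server API command through the session
boolean ok = session.apiExec("publish_dd", "");
System.out.println(ok ? "publish_dd submitted successfully" : "publish_dd failed");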

If you can't use DARDeployer (e.g. on a non-Windows OS: Linux, AIX, etc.), you can install the DAR using Composer Headless. For this, you must create 3 files:

1. darinstall (script file)
Here's a template of the script file (Linux):
#!/bin/sh

export JAVA_HOME=/opt/documentum/shared/java/1.6.0_17
export ECLIPSE=/opt/documentum/product/6.7/install/composer/ComposerHeadless
export WORKSPACE=/opt/project/darinstall-workspace
$JAVA_HOME/bin/java -Xms512m -Xmx1024m -cp $ECLIPSE/startup.jar org.eclipse.core.launcher.Main -clean -data $WORKSPACE -application org.eclipse.ant.core.antRunner -buildfile darinstall.xml


2. darinstall.xml (Ant buildfile)
<?xml version="1.0"?>
<project name="darinstall" default="install">
  <description>ant script to install a DAR file</description>
    <property file="darinstall.properties"/>
    <property name="dar.folder" value="." />
    <target name="install">
      <antcall target="_install.dar">
        <param name="dar.file" value="${dar.folder}/CustomTypes.dar" />
      </antcall>
      <!-- other dars, in order of dependency -->
    </target>
    <target name="_install.dar">
      <echo message="============================"/>
      <echo message="Installing DAR ${dar.file}"/>
      <echo message="============================"/>
      <emc.install dar="${dar.file}" docbase="${dctm.docbase}" username="${dctm.username}" password="${dctm.password}"/>
    </target>
</project>


3. darinstall.properties (configuration, credentials)
dctm.docbase=project_dev
dctm.username=dmadmin
dctm.password=dmadmin
dar.folder=/opt/project/darinstall


Now just launch the first script (darinstall), wait until it completes, then check the installation logs. If you find no errors/warnings, it's done!

Friday, May 10, 2013

Group memberships updates reflected in delay

In a Documentum environment with multiple Content Servers you can encounter the following issue: after you change a group's membership (add/delete users and groups) using one CS, the changes are not reflected when you connect to the repository through the other Content Servers for a period of time (up to several hours). So the Content Server that performs the membership change reflects it immediately, while the others lag behind, usually considerably.
Why is this happening?
That's because of the caching mechanism the Content Servers use. Group memberships are cached from the DB into Content Server memory, so the CS does not query the DB every time it needs this information. When you are connected to a CS and a group is updated, this CS takes care of updating its cache accordingly for that group. But what if you have multiple Content Servers for the same repository? The others will not update their cache, because they don't know there was a change. That's why the group change is not reflected on the other Content Servers until they refresh their cache.
How to solve that?
There is a setting in server.ini which the CS Installation Guide pays little attention to: upd_last_chg_time_from_db. The CS Administration Guide provides the following description for the key:
Optional key. Specifies that all Content Servers in a clustered environment have timely access to all changes in group membership. Valid values are:
- TRUE: should only be used for environments with multiple Content Servers. The value should be set to TRUE for all running Content Servers.
- FALSE: the default value.
So the solution is to modify the server.ini files on all Content Servers, setting this key to TRUE:
# A boolean flag specifying whether user entries and group lists are synchronized from the db.
upd_last_chg_time_from_db=T

Wednesday, May 8, 2013

How to test BOF (TBO/SBO) code changes without re-deployment

Documentum Business Object Framework (BOF) version 2.0 implies deploying all module implementations into the repository as jar artifacts. These jars are downloaded and cached on demand, when the corresponding module is called.
The need to re-deploy BOF modules (TBOs or SBOs) each time we make a change in the code can cause important delays in the development process: recompiling the Composer project, installing it and restarting the JMS each time consumes a lot of time.
Is there a way to test BOF code fast, without redeploying it? Yes, there is!
You can cheat the DFC client and replace the cached jars with newer versions. Here's how you can do it.
After having called the BOF module (to ensure it's cached), go to the BOF cache folder: its location on the client machine can be specified in the dfc.properties file, in the dfc.cache.dir property (the default is the cache subdirectory of dfc.data.dir). Inside you'll find a folder structure like this: [DFC_VERSION]/bof/[DOCBASE_NAME]. In the docbase folder you'll find several jars named [DMC_JAR_OBJECT_ID].jar. You have to locate the one containing your BOF code. For this, you can run the DQL command:
select r_object_id from dmc_jar where object_name='[JAR NAME]'
Or, you could also set the dfc.bof.cache.append_name property so that the cached jar names will also contain their object names (on the next caching).
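
The same lookup can also be done from DFC; a minimal sketch, assuming an open session named session and a hypothetical jar named MyTBO.jar:

// uses com.documentum.fc.client.DfQuery / IDfQuery / IDfCollection
IDfQuery query = new DfQuery();
query.setDQL("select r_object_id from dmc_jar where object_name = 'MyTBO.jar'");
IDfCollection jars = query.execute(session, IDfQuery.DF_READ_QUERY);
while (jars.next()) {
  // the cached file in [DFC_VERSION]/bof/[DOCBASE_NAME] is named after this id
  System.out.println("Cached jar: " + jars.getId("r_object_id") + ".jar");
}
jars.close();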

Once you locate your jar, you have 2 options:
a) replace the cached jar with the new one built in your IDE, keeping the original name
b) open the jar (like a common archive) and replace only the modified class(es)
Remember to stop your DFC client first, to unlock the cached jars; otherwise you can't modify them.
Now start your DFC client and test your changes.

Code fast, test faster!

Tuesday, May 7, 2013

Custom job method DFC code sample

Implementing a custom Documentum job method requires writing some non-trivial DFC code. Below you'll find a sample using an abstract class which can be extended and re-used for any implementation of a job method.
Connection and helper methods are encapsulated in the AbstractJobMethod abstract class, so you can write custom classes that extend it. The only method to be implemented is execute(), in which you have an active session (handled by the abstract class) and can perform the required actions against the repository.
The abstract class uses CustomJobArguments, an extension of the standard DfStandardJobArguments, providing some additional methods to access the passed job arguments.
Check How to create a custom job in Documentum for details on creation of custom job and method.

public class SomeCustomMethod extends AbstractJobMethod {
  public int execute() throws Exception {
    DfLogger.debug(this, "Started custom method implementation with user: " + session.getLoginUserName(), null, null);
    // custom method logic goes here
    return 0;
  }
}
////
import java.io.PrintWriter;
import java.util.Map;

import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;       // IDfClient, IDfSession, IDfSessionManager, DfServiceException, ...
import com.documentum.fc.common.*;       // DfException, DfLogger, DfTime, IDfId, IDfLoginInfo, IDfTime, ...
import com.documentum.fc.methodserver.*; // IDfMethod, IDfMethodArgumentManager, DfStandardJobArguments, ...

// Note: IReportWriter, ReportFactory, SimpleReportWriter and Utility are project-specific helper classes.
public abstract class AbstractJobMethod implements IDfMethod, IDfModule {
// for BOF 1.0 the class must implement only the IDfMethod interface; the jar is deployed on the JMS
// for BOF 2.0 it also implements IDfModule, because it will be deployed into the repository and will run as a BOF module

  protected CustomJobArguments jobArguments;
  protected IDfTime startDate;
  protected IReportWriter reportWriter;
  protected IDfSession session;
  protected IDfSessionManager sessionManager;
  protected IDfClientX clientx;
  protected IDfClient client;

  public int execute(Map args, PrintWriter writer) throws Exception {
    setup(args, writer);
    try {
      int retCode = execute();
      printJobStatusReport(retCode);
      return retCode;
    } catch (Exception e) {
      DfLogger.error(this, "", null, e);
      reportWriter.emitLine("Error encountered: " + e.getMessage());
      reportWriter.closeOut(false);
      throw e;
    } finally {
      if (reportWriter != null)
        reportWriter.close();
      if (session != null)
        sessionManager.release(session);
    }
  }

  // the only method to be implemented in concrete subclasses
  public abstract int execute() throws Exception;

  public void setup(Map args, PrintWriter writer) throws DfMethodArgumentException, DfException, Exception {
    startDate = new DfTime();
    jobArguments = new CustomJobArguments(new DfMethodArgumentManager(args));
    setupMethodFactory(writer);
    String username = jobArguments.getUserName();
    String password = jobArguments.getString("password");
    String domain = jobArguments.getString("domain");
    String docbase = jobArguments.getDocbaseName();
    setupSessionManager(username, password, domain, docbase);
  }

  private void setupSessionManager(String username, String password, String domain, String docbase) throws DfServiceException, DfException {
    DfLogger.debug(this, String.format("setupSessionManager-> username[%s] password[%s] domain[%s] docbase[%s]", username, password, domain, docbase), null, null);
    clientx = new DfClientX();
    client = clientx.getLocalClient();
    sessionManager = client.newSessionManager();
    IDfLoginInfo loginInfoObj = clientx.getLoginInfo();
    loginInfoObj.setUser(username);
    if (password != null && !password.equals(""))
      loginInfoObj.setPassword(password);
    loginInfoObj.setDomain(domain);
    sessionManager.setIdentity(docbase, loginInfoObj);
    session = sessionManager.getSession(docbase);
  }

  private void setupMethodFactory(PrintWriter writer) throws Exception {
    try {
      IDfId jobId = jobArguments.getJobId();
      ReportFactory reportfactory = new ReportFactory();
      if (writer != null) {
        DfLogger.debug(this, "uso reportFactory", null, null);
        reportWriter = reportfactory.getReport(jobArguments.getDocbaseName(), jobArguments.getUserName(), "", jobArguments.getMethodTraceLevel(), jobId, writer);
      } else {
        DfLogger.warn(this, "writer == null, using SimpleReportWriter", null, null);
        reportWriter = new SimpleReportWriter();
      }
    } catch (Exception e) {
      DfLogger.error(this, "Failed to create report writer. Error: " + e.getMessage(), null, e);
      throw e;
    }
  }

  private void printJobStatusReport(int retCode) throws Exception {
    reportWriter.emitLineToReport("Return Code-> " + retCode);
    String jobStatus = null;
    IDfTime end_date = new DfTime();
    long min_duration = Utility.timeDiff(startDate, end_date) / 60L;
    if (retCode == 0)
      jobStatus = "Custom Job completed at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes.";
    else if (retCode > 0)
      jobStatus = "Custom job completed with Warnings at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes.";
    else
      jobStatus = "Custom job completed with Errors at " + end_date.asString("yyyy/mm/dd hh:mi:ss") + ". Total duration: " + min_duration + " minutes. Check job report for details.";
    updateJobStatus(jobStatus, jobArguments.getJobId());
    reportWriter.closeOut(retCode >= 0);
  }

  public void updateJobStatus(String sJobStatus, IDfId jobId) throws Exception {
    if (session == null) {
      DfLogger.error(this, "setJobStatus: (session==null)", null, null);
      throw new NullPointerException("setJobStatus: (session==null)");
    }
    IDfPersistentObject job = session.getObject(jobId);
    if (job == null)
      throw new DfException("Failed to retrieve dm_job object from id '" + jobId.getId() + "'.");
    job.setString("a_current_status", sJobStatus);
    job.save();
  }
}
////
public class CustomJobArguments extends DfStandardJobArguments {

  protected IDfMethodArgumentManager methodArgumentManager;
 
  public CustomJobArguments(IDfMethodArgumentManager manager) throws DfMethodArgumentException {
    super(manager);
    methodArgumentManager=manager;
  }

  public String getString(String paramName) throws DfMethodArgumentException {
    return methodArgumentManager.getString(paramName);
  }

  public int getInt(String paramName) throws DfMethodArgumentException {
    return methodArgumentManager.getInt(paramName).intValue();
  }
}