NCASI Statistics and Model Development Group 1
Date: Version 3 January 2, 2006
Habplan was declared to be at version 3 in early 2005 after the addition of several new features. Habplan now has the ability to write out MPS files for input to a Linear Program solver. It is also possible to create management units. Polygons that are placed in the same unit must be assigned the same management regime. Another new feature enables linking of Flow components. This allows a user to control some key variables for multiple Flows by making adjustments on a single Flow edit form. Version 3 of Habplan is backward compatible with version 2. This means that anything you did with version 2 will still work, and you can ignore the new features without penalty.
Habplan uses a simulation approach based on the Metropolis Algorithm. It does not use simulated annealing or genetic algorithms, but it is closely related to both. Habplan is a random (feasible) schedule generator. It keeps running and generating alterations to the previous schedule for as long as you like. As the Metropolis iterations proceed, objective function weights are adaptively determined. This enables Habplan to meet the user's goals relative to each component in the objective function. For example, the user specifies whether Flows should be level, decreasing, or increasing, as well as how much year-to-year deviation is allowed in the flow. The user can also specify minimum and maximum blocksizes along with green-up windows. Flow and Blocksize are just two of the objective function components. Other components are described below.
Habplan should run under any operating system that has a Java Virtual Machine, which includes Windows, Solaris, Linux, and Macs. Installation is easy.
To run the program, get started by opening a form (from the Edit menu) for an objective function component that you're interested in. Fill it out and then check the box on the main Habplan form. This will cause Habplan to read the component's data. Assuming the data are OK, you are ready to fill out another component form, or to press START to initiate the scheduling algorithm. Use the Save option in the File menu so you don't have to fill the forms out again. Note that the SUSPEND button suspends the run at the end of the current iteration; the run can be continued where it left off by pressing the START button. STOP stops the run, and a subsequent START begins with a new random starting schedule.
Information about objective function components is given in more detail further along, but here's a quick overview of currently available components:
Detailed information on each component follows:
All commercial uses of HABPLAN by individuals and organizations other than NCASI member companies are strictly prohibited unless authorized in advance in writing by NCASI. Commercial uses include, but are not limited to (a) any use of HABPLAN that has any substantial effect on private forest management; (b) any use of HABPLAN in consulting or other commercial service activities conducted for public-sector or private-sector clients, and (c) any sale of software or software-related services based directly or indirectly on HABPLAN.
Internal use of HABPLAN by NCASI member companies is unlimited. Member companies that wish to use HABPLAN for external commercial purposes must pay a supplemental annual fee (in addition to dues). External commercial purposes include, but are not limited to: (a) any use of HABPLAN in consulting or other commercial service activities conducted for public-sector or external private-sector clients, and (b) any sale of software or software-related services based directly or indirectly on HABPLAN. The supplemental fee for external use of HABPLAN for the year 2004 is $5,000. This fee may be adjusted at the start of each NCASI fiscal year (April 1). If a NCASI member company decides to terminate its membership in NCASI, any and all rights to commercial use of HABPLAN will terminate simultaneously with termination of membership.
Habplan is a standalone program. Java programs are often run as Applets in a web browser; however, Habplan requires access to system resources like reading and writing files, which is considered a security violation for Applets. Habplan has been tested on a Sun SparcStation running Solaris and on Windows NT and XP machines. However, it should run anywhere that Java runs.
Installing the program involves picking a starting directory where you put the habplan3.zip file, and then unzipping it. It is suggested that you create a directory called ncasi. Then you can unzip Habplan, Habgen, and/or Habread inside the ncasi directory.
After unzipping habplan3.zip, you have a directory called Habplan3. On a Windows system, you can create a shortcut to the file h.bat that is in the Habplan3 directory. Do this by right clicking on open space on your screen and selecting new - shortcut. Then browse to the Habplan3/h.bat file and select it. You should be able to double click on the new shortcut to start Habplan.
As an alternative to the shortcut approach, open a command prompt in the Habplan3 directory and type: java Habplan3. After a few seconds, Habplan should appear on your screen. If nothing happens, try executing like this: "java -classpath . Habplan3", where the "." tells java to look in the current directory.
Note that the Java interpreter allocates 16 MB of RAM by default. To run big scheduling problems, get more RAM by starting Habplan like this: java -mx256m Habplan3. This allows for 256 megabytes.
For Linux and unix systems, make the lp_solve file in the Habplan3/LP directory executable. The following commands should work: cd YOURPATH/Habplan3/LP; chmod u+x lp_solve.
There is no one computer program in the world that can account for all variables in nature. Therefore, it is important to keep in mind that harvest scheduling is merely man's best effort at simplifying a very complex and dynamic natural phenomenon into a mathematical formula, and by no means offers the perfect solution in the quest for the optimal management regime. However, it is safe to say that various harvest scheduling methods are capable of providing fairly reliable guidelines by which land can be managed.
Habplan uses a Model I formulation. The primary advantage of using a Model I rather than a Model II formulation is the ability to track all management units throughout their existence, which is a requirement when spatial constraints are included in a harvest scheduling problem.
Linear programming is a widely used mathematical programming tool for computing optimal solutions to problems involving the allocation of scarce resources. This optimization algorithm seeks to improve on simulation outputs by sorting through harvest schedules to produce better combinations of objectives. This is done by ranking possible harvest schedules using an objective function. LP was the first optimization method applied to maximize harvestable volumes or Net Present Values (NPV), and has thus been around for many years. The primary shortcoming of LP, however, is its limited ability to account for spatial aspects of harvest scheduling. Thus, what LP suggests to be the optimal solution usually turns out to be impossible in the real world.
With increasing emphasis placed on spatial concerns such as fragmentation and patch size in forest management, and the continued introduction of new spatially-oriented environmental and social constraints, spatial simulation techniques continue to be developed. These simulation algorithms mimic processes in harvest scheduling, seeking ultimately to arrive at the same outcome that would occur had the situation been played out in real life. Thus, these simulation algorithms do not offer one optimal solution, as does LP, but rather, in principle, they compute a range of harvest schedules that are all feasible. At this stage, we are not aware of any available simulation-based harvest-scheduling packages that report multiple feasible solutions. Other harvest scheduling packages seem to simply converge on the ``best'' solution. Habplan, however, does have the capability to report multiple feasible solutions. Although these simulation techniques may not be capable of finding the perfectly optimal solution, they are capable of finding near-optimal solutions, sometimes within a few percent of the optimal.
Habplan uses a statistical simulation approach based on the Metropolis algorithm. It does, however, also have a LP capability, which is useful in that it allows the user to compare the Metropolis algorithm solution to the non-spatial optimal solution.
- How many iterations should the program try before it stops.

The Edit menu is used to open or close objective function component forms. Another menu allows you to open graphs for components that have associated graphs, but graphs are available only when a component is added to the objective function. The checkboxes allow you to add components to the objective function. The unit checkbox allows you to enforce management units.
There are also pull-down menus. The File menu has choices to Open or Save settings from the component forms to a file. Output lets you periodically save results about the current schedule and each Flow and Block component to a file.
The Edit menu is used to open or close objective function component forms and the Management Unit form. Another menu allows you to open graphs for components that have associated graphs, but graphs are available only when a component is added to the objective function.
Another menu allows you to open the remote control window to run Habplan simultaneously on multiple computers. It also lets you open a fitness function window to specify how the best schedule is determined.
The Misc menu controls things like the license and sound. It also has an option to reconfigure Habplan to show different objective function components, which is a very important feature.
The Help menu provides some immediate help text. However, the on-line or PDF version of the manual is likely to be the most up-to-date reference.
The file format is very simple. Column 1 gives the polygon id and column 2 gives the management unit id.
Polygon ID | Unit ID
Column 1 includes all the polygons that need to be assigned to a management unit. Column 1 polygon id's should be already known to Habplan from reading flow data or data for some objective function component. Any polygons that aren't in the management unit file will become the sole member of a one polygon management unit. If a polygon appears more than once, it will be ignored after the first appearance. If no management unit file is supplied, then each polygon becomes a 1 member management unit. This is how Habplan originally worked.
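The defaulting rules just described can be sketched as follows. This is a hypothetical Python helper for illustration only (the singleton unit ids it generates are an assumption), not part of Habplan:

```python
# Sketch of Habplan's management-unit defaulting rules (hypothetical helper,
# not Habplan itself).

def assign_units(unit_rows, known_polygons):
    """unit_rows: iterable of (polygon_id, unit_id) pairs from the unit file.
    known_polygons: all polygon ids Habplan learned from flow/component data.
    Returns a dict mapping polygon id -> unit id."""
    units = {}
    for poly, unit in unit_rows:
        # A polygon appearing more than once is ignored after the first time.
        if poly not in units:
            units[poly] = unit
    # Any polygon not in the file becomes the sole member of its own unit.
    for poly in known_polygons:
        if poly not in units:
            units[poly] = "unit_" + str(poly)   # assumed singleton unit id
    return units

print(assign_units([(1, "A"), (2, "A"), (1, "B")], [1, 2, 3]))
```

Here polygon 1's second appearance (unit "B") is ignored, and polygon 3, absent from the file, gets a one-polygon unit of its own.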
All polygons in a management unit will be assigned the same management regime. Suppose polygons 1 and 2 are in the same unit. Polygon 1 has regimes A, B, C, and D in the Flow and Bio2 component files. Polygon 2 has regimes A, B, and C. Since regime D is not allowed for polygon 2, it won't be allowed for polygon 1, because it's in the same management unit as polygon 2. You need to keep this in mind when assigning polygons to units.
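In other words, a unit's allowed regimes are the intersection of its members' allowed regimes. A small Python sketch of that rule (an illustration of the behavior described above, not Habplan code):

```python
# Hypothetical illustration of the shared-regime rule: polygons in one unit
# can only receive regimes that every member polygon allows.

def unit_regimes(allowed_by_polygon, unit_members):
    """allowed_by_polygon: dict of polygon id -> set of regime ids.
    unit_members: polygon ids placed in the same management unit."""
    members = iter(unit_members)
    regimes = set(allowed_by_polygon[next(members)])
    for poly in members:
        regimes &= allowed_by_polygon[poly]   # drop regimes any member lacks
    return regimes

# The example from the text: polygon 1 allows A,B,C,D; polygon 2 allows A,B,C.
allowed = {1: {"A", "B", "C", "D"}, 2: {"A", "B", "C"}}
print(unit_regimes(allowed, [1, 2]))   # regime D is no longer allowed
```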
You might also want to break multi-part stands into individual polygons and then assign those polygons to the same unit. Multi-part stands become a problem when one wants to control block sizes and other spatial patterns. For example, consider a 2 polygon stand, where each polygon is 100 acres. Polygon A has 2 neighbors and polygon B has 3 neighbors, but they don't share any neighbors. If this is treated as a single stand, then it is a 200 acre stand with 5 neighbors. This becomes unnecessarily restrictive when trying to control blocksize. When the stand is split into its component polygons, this problem goes away. Putting the component polygons into the same unit forces them to get the same management regime.
OBJ = F1 + F2 + F3 ...
Component F1 would require a file giving the full information for the flow involved, as would components F2 and F3. The implication here is that a management option applied to a polygon yields multiple outputs of different kinds and possibly at different dates. The usual output considered in harvest scheduling is wood by weight, volume, or present net value. Presumably, you might want a flow term for each of these. However, if there is only 1 year of output for each regime, this situation could be more efficiently handled with CCFlow terms if the outputs occur in the same year. This way, you don't carry the memory overhead of reading in the larger Flow files for each component. (Habplan holds everything in memory).
Another use for multiple Flows might be where there are intermediate operations, like thinning, that occur at different years from the principal flows. In this case, you might want an extra flow term to control thinning outputs, or habitat creation efforts. Such outputs might be specified in terms of wood, costs, area, or sediments.
Multiple flows are also used for multi-district scheduling. Suppose F1 and F2 are the flows for district1 and district2. Then F1 and F2 read their data from a file. F3 would represent the regional level flow and doesn't require its own file; instead, you type F1; F2 in place of the file name on the F3 entry form to indicate that F3 "owns" F1 and F2. Note that there is no limit to the hierarchy that can be created, e.g. districts can have sub-districts, which can have sub-sub-districts. Only the lowest level flows in the hierarchy actually read data, since higher levels get their data from the sub-flows that they own. Likewise, the lowest level flows can each have their own Block and CCFlow components, but higher level flows (like F3) can't. The rule of thumb here is that any flow that directly reads data can have associated Block and CCFlow components. A flow component that gets its data from sub-Flow components can't have a Block or CCFlow component. A block file for multi-district scheduling can only contain the polygons that belong in the sub-district.
Bio-2 and Spatial model components can be used within the context of multi-district scheduling. However, Bio-2 and SMOD components must read data that contains all of the polygons from each sub-district. Bio-2 and SMOD must be viewed as global components that apply to all districts. Contact NCASI if you would like an example data set to evaluate multi-district scheduling capabilities.
Suppose you check the F2 box under the ``Link'' menu for F1. This means that any adjustments you make to sliders on F1 will occur simultaneously on F2. You can link as many Flows as you like. This may be useful for runs that have lots of Flow components where some of them are related. You cannot link F2 to F1 and also link F1 to F2; this is a circularity that would create problems.
Note that when you link F2 from F1, slider changes on F1 affect F2. However, slider changes on F2 will not affect F1.
Each flow component requires a data set. For each polygon, there is one row for each regime that is allowed. A disallowed regime is simply not included in the data for the polygon, which prevents that regime from ever being assigned to that polygon. For example, maybe you can't allow clearcutting for polygon 10 because it's near a stream. Be careful with multiple flows that each flow dataset contains all allowed regimes, even if some of the regimes produce 0 output for some of the flows.
A simple example of some flow data follows. Note that outputs are polygon totals, NOT per-acre or per-hectare. For regimes with multiple output years, the format calls for entering the years and then the outputs, so if polygon 2 had 2 years of output for option 3, you would enter both years followed by both outputs on that row. For the data depicted below, polygon 1 can only be assigned option 16, while polygon 2 could have options 1-8.
Poly ID | Regime ID | Year | Output
Also, the output is an integer - it takes much less memory to store integers than floating-point values.
Option 16 is a do-nothing option in the above dataset. Do-nothing options are denoted with output=0 and optionally with year=0. Year=0 indicates that this period of this option does not contribute to blocksizes, and this will be auto-detected by the blocksize component for this flow. Finally, remember that regimes can have variable numbers of output years. For example, regime 1 may produce output in years 1 and 21, while regime 20 only produces output in year 20. There is no requirement for the data input file to be rectangular for flow components. If you like rectangular files, however, you could create extra dummy periods for your regimes with year=0. For example, suppose regime 2 has a second dummy period for polygon 1 as follows:
The first period has an output of 120 for year 1 and the second period has output 0 in year 0.
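Assuming each row lists the polygon id and regime id followed by all the years and then the matching outputs (so the trailing fields always come in equal halves), a row parser might look like this. This is a sketch of the layout described above, not Habplan's actual reader:

```python
# Sketch of a parser for one flow-file row (assumes years come first,
# then the matching outputs; not Habplan's actual reader).

def parse_flow_row(line):
    fields = line.split()
    poly, regime = fields[0], fields[1]
    rest = fields[2:]
    n = len(rest) // 2            # first half: years, second half: outputs
    years = [int(y) for y in rest[:n]]
    outputs = [int(v) for v in rest[n:]]
    # A (0, 0) period is a dummy that never contributes to blocksizes.
    periods = list(zip(years, outputs))
    return poly, regime, periods

# Polygon 1, regime 2: output 120 in year 1 plus a dummy period in year 0.
print(parse_flow_row("1 2 1 0 120 0"))
```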
Habplan provides two ways to do this:
The first method requires you to designate the early years of the planning horizon as ``byGone'' on the Habplan Flow Edit Form. The regimes that were actually applied are supplied for the stands in the byGone years. Habplan makes no effort to schedule things in these byGone years, since they are in fact already scheduled.
The second approach may often be easier to implement, and Habread can help you. The idea is to create a Flow dataset where years when precutting occurred are designated by -1 (``1 year before the start of the planning horizon''), -2 (``2 years before''), etc.
Figure 1 shows what the Flow data might look like when precutting is implemented.
You only need to use this feature for Flow components that have a Block subcomponent; otherwise it's a waste of time. The block component will recognize the negative years and will incorporate the precut stands into blocks where appropriate. For example, suppose the planning horizon starts in year 1, and the green-up window is 3 years. Then a stand that was precut in year 0 will have a -1 year designation in the first year column of the flow data. This same stand will contribute to blocksizes for any neighbors that are cut in years 1, 2, or 3. Likewise, a stand that was cut 3 years before year 1 is designated with -3 in the Year column, and would only affect blocksizes of neighbors cut in year 1.
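The green-up arithmetic in that example can be sketched in a few lines. This is my reading of the -k designation (cut k years before year 1), shown as a hypothetical Python helper rather than Habplan code:

```python
# Sketch of the precut green-up arithmetic (an illustration, not Habplan code).
# A designation of -k in the Year column means the stand was cut k years
# before year 1 of the planning horizon, i.e. in calendar year 1 - k.

def actual_year(designation):
    return designation if designation > 0 else 1 + designation

def share_block(year_a, year_b, greenup):
    """True if two neighboring cuts fall within the same green-up window."""
    return abs(actual_year(year_a) - actual_year(year_b)) <= greenup

# With a 3-year window, a stand precut at -1 blocks with neighbors cut
# in years 1-3, while a -3 precut only blocks with a year-1 neighbor.
print(share_block(-1, 3, 3))   # True
print(share_block(-3, 2, 3))   # False
```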
A final thing to consider is the effect of precutting on the number of actions that occur for a regime. Precutting simply adds an extra action time to the regime. For example, if the regime would normally have one action time that represents clearcutting, it would have 2 actions when precutting is designated.
You fill out a flowform to control the behavior of a flow component. After entering the file name, you have:
Habplan doesn't worry about trying to follow the target for byGone years, since it assumes it must assign the single regime that you include in the input data file for polygons managed in these years. A useful concept: if you don't want Habplan to worry about certain years being on target, you could declare them as byGone even if they really aren't!! However, this would mean that blocksizes for these years would be allowed to go out of bounds also.
There is another way to handle precut stands, described in Section 7.2.1.
You can also have a mix of specified and smoothed targets. For example, if you want the flow for year 10 to be 100000, just specify 10,100000; in the flowform textarea. (Check the noModel box to get to this textarea.) This will set the year 10 target to 100000, but use the smoothing model for the other years. By specifically putting values for each year, you can create any desired flow. However, try the internal smoothing model first before bothering to enter specific targets.
The flow component also allows for multi-district scheduling. The idea here is that flows can have subFlows and subFlows can have sub-subFlows etc. Suppose that F3 has F1 and F2 as subflows. Indicate this in the file/subFlows textfield as follows: F1;F2; on FlowForm 3. The F1 and F2 file/subFlow field should have file names, because they would read data, and F3 would use their data to create a superFlow. This allows you to control the subFlows, F1 and F2, to look at their effect on the superFlow, F3. Conversely, you can control F3 and look at what happens to F1 and F2. This feature should be useful whenever you have subregions that need to be tracked separately.
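Assuming a super-flow is simply the year-by-year sum of the sub-flows it owns (which is how the hierarchy reads above), the aggregation can be sketched as:

```python
# Sketch of how a super-flow might aggregate its sub-flows (assumed behavior:
# the higher-level flow is the year-by-year sum of the flows it owns).
from collections import Counter

def super_flow(*subflows):
    """Each subflow is a dict of year -> output; returns their yearly sum."""
    total = Counter()
    for flow in subflows:
        total.update(flow)          # adds outputs year by year
    return dict(total)

f1 = {1: 100, 2: 120}          # district 1 (hypothetical outputs)
f2 = {1: 80, 2: 90, 3: 50}     # district 2 (hypothetical outputs)
print(super_flow(f1, f2))      # the regional flow F3
```

Controlling F3 then amounts to nudging F1 and F2 until their sum meets the regional target, which is what the form linkage lets you explore interactively.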
You might be able to use a flow component to control the amount of inventory at the end of the planning period, if you are clever. For example, you could create a flow component where the input data gives the age of the stand at the end of the planning period that would result from each management regime. The output associated with this age could be the acres in the stand. Then the resulting flow graph will give the acres by age-class at the end of the period.
OBJ = F1 + C1(1) + F2 + C2(1) + F3
You'd have to use the config option to get the extra CCFlow components. In order for this model to work, you'd give each CCFlow component the acreage of the stands within its respective district. Note that super-Flow components, i.e. flows that have sub-flows, can't have CCFlow or Block components.
The ccFlow graphs show (1) ccFlow versus the target, and (2) the Flow/ccFlow ratio. This might typically be the volume per acre ratio if Flow is volume and ccFlow is controlling acres cut.
The CCFlow data format is very simple. There is 1 row for each polygon, unless you want the polygon's CCFlow value to default to 0. There are just 2 columns containing: the polygon id, and the CCFlow value. Usually the value is polygon size. Note that this could have some limitations when your regimes have multi-year outputs. The single CCFlow value must apply over the multiple years. The assumption is that polygon size remains constant, and there might be other variables that are constant as well. However, for schedules involving only single year regimes, this limitation doesn't apply. To make it clear, here's what CCFlow data might look like for polygons 1-3:
This component can use the same file that the BlockSize component uses. It will ignore the extra information required by the blocksize component. The CCFlow component is convenient when it meets your needs. However, for multi-year regimes, you may need to create another Flow component to get the job done.
First, enter the file name. Then fill out the following fields on the ccFlow form:
You can also indicate a regime prefix, which is automatically expanded. For example, if your clearcut regimes all begin with ``CC'', then put ``)CC'' into the CCFlow form text field. Don't include the quotes; the ``)'' indicates that this is a prefix. Suppose you have regimes that begin with ``CT'' to mean clearcut as the first action and thinning as the second action. Indicate this to Habplan as )CT@1, which says to include the first period of all regimes that begin with CT.
Any combinations of this entry notation are allowed, so )CC;PT@1; gets all regimes that begin with CC, and the first period of regime PT.
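A sketch of how this entry notation might expand against a regime list. This is my reading of the rules above (semicolon-separated entries, ``)'' marking a prefix, ``@n'' selecting periods), written as a hypothetical Python helper rather than Habplan's parser:

```python
# Sketch of the ")prefix" entry notation (my reading of the rules above,
# not Habplan's parser). Entries are separated by ";"; a leading ")" marks
# a prefix, and "@n" suffixes select specific periods.

def expand(entry, known_regimes):
    """Return a set of (regime, period) pairs; period None means all periods."""
    selected = set()
    for item in filter(None, entry.split(";")):
        parts = item.split("@")
        name, periods = parts[0], [int(p) for p in parts[1:]] or [None]
        if name.startswith(")"):                    # prefix match
            matches = [r for r in known_regimes if r.startswith(name[1:])]
        else:                                       # exact regime name
            matches = [name]
        for regime in matches:
            for period in periods:
                selected.add((regime, period))
    return selected

# ")CC;PT@1;" selects every CC* regime plus period 1 of regime PT.
print(expand(")CC;PT@1;", ["CC1", "CC2", "PT", "CT"]))
```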
Note that this system only works when your regime names have an intrinsic meaning that applies across all stands. Clicking on the window will cause the CC regimes that Habplan is using to be listed for you. You may need to hold the mouse button down while entering the CC regimes to prevent listing from occurring.
This component controls minimum and maximum blocksize. However, it also provides information on average blocksize, which can be indirectly controlled. Blocksize is a subcomponent of flow, since it needs to know when the flows occur to compute blocksizes. A block is defined by a target polygon and all of its neighbors that were subjected to a "block" treatment within a specified time window. Typically the block treatment is clearcutting and the time window allows for 'green-up' to occur. This component can also be used to enforce cutting limits within any block of stands by making each stand in the block a neighbor of every other stand in the block. For this special use, only first order neighbors need to be considered in the block size computations. By definition, block sizes must be re-evaluated annually. At any given block, some stands will grow out of the block and new ones may enter each year.
A parent Flow component, say F1, can have any number of dependent block components, BK1(1), ... , BKn(1). You might want to control maximum cluster size of more than one kind of treatment. Maybe you don't want too much old-growth forest in one location, or too much thinning. You might also want to have different blocksizes for different green-up periods. If you never want to cut more than 100 acres or less than 20 in a given year, for example, have a BK component with a 0 year green-up to specify this. Computing blocksizes is computationally demanding, so expect the program to run slower when a blocksize component is in the model. A typical objective function with 1 component each for flow, ccFlow, and blocksize looks like this:
OBJ = F1 + C1(1) + BK1(1)
Use the config option if you want more components.
Let's look at some data. The first 2 columns are polygon id# and size. The data can have one or more neighbors on each line and use one or more lines per polygon. You can pad the file with 0's to make it rectangular if you like, and Habplan knows to ignore them (notice the entry for polygon #8). The data below show that polygon 1 has size=124 and 2 neighbors, which are polygons 2 and 4. This is the 1-row-per-polygon format. Fortunately, the data show that polygons 2 and 4 also consider polygon 1 to be a neighbor. Habplan doesn't look for logical consistency within this file; that's your job.
1 124 2 4
2 3 1 3
3 31 2
4 65 1
5 145 6
6 74 5 7
7 43 6
8 33 9 414 0 0 0 0
Here is the same input data in a one neighbor per line format.
1 124 2
1 124 4
2 3 1
2 3 3
3 31 2
4 65 1
5 145 6
6 74 5
6 74 7
7 43 6
8 33 9
8 33 414
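Since checking this file for logical consistency is your job, not Habplan's, a quick symmetry check along these lines (a hypothetical helper, not part of Habplan) can catch one-sided neighbor entries:

```python
# Hypothetical consistency check: flag neighbor relations that are not
# mutual (Habplan itself leaves this to you).

def asymmetric_pairs(rows):
    """rows: iterable of (polygon, size, *neighbors); 0 entries are padding."""
    neighbors = {}
    for row in rows:
        poly = row[0]
        nbrs = {n for n in row[2:] if n != 0}
        neighbors.setdefault(poly, set()).update(nbrs)
    bad = []
    for poly, nbrs in neighbors.items():
        for n in nbrs:
            if poly not in neighbors.get(n, set()):
                bad.append((poly, n))        # n never lists poly back
    return sorted(bad)

# The example dataset above: polygon 8 lists 9 and 414, but they never
# appear as polygons themselves, so the relation is one-sided.
rows = [(1, 124, 2, 4), (2, 3, 1, 3), (3, 31, 2), (4, 65, 1),
        (5, 145, 6), (6, 74, 5, 7), (7, 43, 6), (8, 33, 9, 414, 0, 0)]
print(asymmetric_pairs(rows))
```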
You can indicate a regime prefix, which is automatically expanded. For example, if your notBlock regimes all begin with ``T'', then put ``)T'' into the Block form. The ``)'' indicates that this is a prefix. Suppose you have regimes that begin with ``CT'' to mean clearcut as the first action and thinning as the second action. Indicate this to Habplan as )CT@2, which says period 2 of CT regimes is notBlock. Any combinations of this entry notation are allowed, so )T;PC@1; gets all regimes that begin with T, and the first period of regime PC. Finally, )CTT@2@3 would get periods 2 and 3 of CTT regimes, and )CTCT@2@4 would get periods 2 and 4 of CTCT regimes.
Note that this system only works when your regime names have an intrinsic meaning that applies across all stands. Clicking on the window will cause the notBlock regimes that Habplan is using to be listed for you. You may need to hold the mouse button down while entering the notBlock regimes to prevent listing from occurring.
A regime with output year=0 is a convenient way to specify a do-nothing option in the parent FLOW data. Habplan will automatically detect year 0 options and treat them as notBlock options. Click on the notBlock textArea after reading the data to see what notBlock options were auto-detected. This will also give a list of any notBlock options not found in the parent Flow Data. Make sure these are not typos. Sometimes you may have a valid notBlock option that was never allowed for any of the polygons. In this case it is OK to include it as a notBlock, even though it will not occur with the current dataset. It may, however, occur in a future dataset.
Finally, you can put notBlock options in a dataset, rather than manually enter them on the form. This might be useful if there are a lot of notBlock options that can't be neatly specified with a prefix. Suppose you create a text file called notBlocks.dat. Put regime@period pairs separated by spaces, commas or semicolons in the file. There can be 1 or more regime@period pairs per line. If you put the file in a sub-directory of the Habplan3/example directory called myProject, then enter the following in the notBlock entry field: "myProject/notBlocks.dat" without the quotes. Then Habplan will read the notBlock options from this file. If you put the file elsewhere, you'll need to enter a complete path, which also reduces the portability of this project.
If the goal on the Block component is set to 1.0 and Habplan is able to converge for this component, then all blocks that are above or below the limits will be made to conform to within limit sizes. After some more iterations, the graphs will look like those in Figure 4.
These graphs show how much area is in a managed block each year and provide quite a lot of information about block size distribution and trend. For green-up windows greater than 0, not all area in a managed block was managed in the current year.
OBJ = F1 + C1(1) + BK1(1) + BioI1 + BioI2
Edit the properties file if you want a different configuration of components.
What data are required? The first column is polygon id#. Column 2 indicates whether this polygon is a trainer - a 0 means no, anything else indicates the regime that the polygon is a trainer for. The remaining columns are the values of the biological variables the user selected. These variables should be continuous and apply to the polygon throughout the regime. For regimes with only 1 output year, this is no problem - for multiple output years it becomes more limiting. For example, slope applies throughout a regime, but number of loblolly pines at time 0 may not be relevant for a multiyear output regime, since the polygon might get clearcut and replanted midway through the regime.
The data below indicate that polygons 1 and 2 are not trainers. Polygons 3-6 are trainers for options 16, 15, 5, and 10, respectively. Obviously, a polygon can only serve as a trainer for one regime. Each option should have at least 10 to 20 trainer polygons so that accurate training statistics are obtained.
1 0 10 124
2 0 263 3
3 16 3456 31
4 15 4451 65
5 5 13534 145
6 10 6782 74
7 0 10 43
8 16 3505 33
9 16 3146 54
10 2 12617 118
11 8 6758 63
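Because each option should have 10 to 20 trainer polygons, it is worth counting trainers per option before a run. A hypothetical Python check (not part of Habplan), using a subset of the rows above:

```python
# Sketch of a trainer-count check for Bio-I data (hypothetical helper):
# the text suggests 10-20 trainer polygons per option for reliable statistics.
from collections import Counter

def trainer_counts(rows):
    """rows: (polygon, trainer_flag, *bio_vars); flag 0 means not a trainer."""
    return Counter(flag for _, flag, *rest in rows if flag != 0)

rows = [(1, 0, 10, 124), (3, 16, 3456, 31), (4, 15, 4451, 65),
        (8, 16, 3505, 33), (9, 16, 3146, 54)]
counts = trainer_counts(rows)
print(counts[16])   # option 16 has only 3 trainers in this subset
```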
The Bio-I component doesn't influence the determination of valid regimes for a polygon. Therefore, you can include training data for any subset of the available regimes; i.e. there is no requirement that every regime have training data included in the Bio-I dataset. When a regime isn't present in the Bio-I data, Habplan will skip the Bio-I component when considering the assignment of that regime to a polygon.
For this component, doing things relative to a maximum may be hard to interpret. If you prefer, Habplan allows you to control the proportion of polygons that are assigned to regimes that are at least as good as the Goal Kind value. A value of 1 means you want the proportion of polygons specified by the Goal (discussed next) to be assigned to regimes equal in rank to the highest ranking regime. Note that several different regimes could have the same rank, so you'd be indifferent among them. A value of 2 means you're indifferent to assignments equal to the 2nd ranking regime and higher. In the extreme, if there are a total of 10 regimes and the Goal Rank is 10, then you are indifferent to everything and this Bio-I component serves no purpose. Note that the ranking is based on the currently valid regimes for the polygon. Valid regimes can change depending on what components are currently in the objective function.
OBJ = F1 + C1(1) + BK1(1) + BioII1 + BioII2
Use the config option if you want a different configuration of components.
Let's look at some data. The first column is polygon id#, the second is the option, and the last column is the rank. There is one row for each stand and regime. The following data show a situation where stand 1 must be assigned to regime 2 (the only regime given), but stand 2 could be assigned to regimes 1-3, with regime 3 being preferred. The size of the numbers is irrelevant; only the ranking is important. Therefore ranks 1, 2, 3 give the same result as 0.1, 16, 37.
1 2 1
2 1 0.1
2 2 0.4
2 3 0.5
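The point that only the ordering matters can be shown with a tiny ranking sketch (an illustration, not Habplan code):

```python
# Illustration that only the ranking of Bio-II values matters, not their
# size (a sketch, not Habplan code). Tied values share the same rank.

def ranks(values):
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

print(ranks([0.1, 0.4, 0.5]))   # [1, 2, 3]
print(ranks([0.1, 16, 37]))     # [1, 2, 3] -- the same result
```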
For qualitative rankings, doing things relative to a maximum value doesn't make sense. In this case, Habplan allows you to control the proportion of polygons that are assigned to regimes that are at least as good as the Goal Kind rank. A value of 1 means you want the proportion of polygons specified by the Goal (discussed next) to be assigned to regimes equal in rank to the highest ranking regime. Note that several different regimes could have the same rank, so you'd be indifferent among them. A value of 2 means you're indifferent to assignments equal to the 2nd ranking regime and higher. In the extreme, if there are a total of 10 regimes and the Goal Kind rank is 10, then you are indifferent to everything and this Bio-2 component serves no purpose. Note that the ranking is based on the values in the Bio-2 input data and the currently valid regimes for the polygon. Valid regimes can change depending on what components are currently in the objective function.
This is a top level component and doesn't require a specific parent component. You can have as many of these components as you want. A spatial model component (SMod) is configured by supplying parameter values to a spatial model. For example, if you want regime 1 to be close to other regime 1's you enter 1,1,-1 on the SMod entry form. If you want to put twice as much effort into keeping regime 1 away from regime 2 polygons you enter 1,2,2. The first two integer values give the index of the regimes, and the third integer is the relative weight given to the spatial biasing done by the program. A negative number means to put the regimes together, a positive number means to keep them apart if possible. On the SMod entry form, you specify a goal which tells the program in general how hard to work to achieve your indicated spatial model objectives. A goal of 1 means to work very hard at it, while a goal of 0.5 means only about 50 percent of the polygons need to comply.
NEW: There are 2 new spatial models that can be selected on the Spatial Model Form: Model 0 or Model 1. Model 0 is the old standby that takes 3 input values, i.e. 2 regimes followed by a positive or negative number to indicate if these regimes should be close or apart. Model 1 takes 4 input values, as follows: DN, BP, 0.75, -1; where DN and BP are the regimes, 0.75 is an area proportion, and -1 says to put DN and BP regimes together.
The Area proportion is a new twist. Suppose the area of the DN polygon is A; then model1 allows you to request that DN polygons have a specified proportion of their area in surrounding BP polygons. Input like "DN,BP,0.75,-1" says that Habplan should try to set things up so that the area of BP polygons near each DN polygon is at least equal to 75 percent of the DN polygon area. This is more specific than model0, which just says to put DN and BP polygons together if possible. Figure 5 is an example of what the input form would look like. Note that only 3 values need to be entered for model0, but if 4 values are present, model0 ignores the third value (the proportion).
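The model1 area condition can be sketched as follows (the data structures here are hypothetical, not Habplan internals):

```python
# Sketch: check which DN polygons satisfy a "DN,BP,0.75,-1" request,
# i.e. the BP area adjacent to each DN polygon is at least 75% of
# the DN polygon's own area.

def model1_satisfied(polygon_area, polygon_regime, neighbors,
                     target="DN", context="BP", proportion=0.75):
    """Return the target-regime polygons whose surrounding context-regime
    area meets the requested proportion of their own area."""
    ok = []
    for p, regime in polygon_regime.items():
        if regime != target:
            continue
        context_area = sum(polygon_area[n] for n in neighbors[p]
                           if polygon_regime[n] == context)
        if context_area >= proportion * polygon_area[p]:
            ok.append(p)
    return ok

areas = {1: 40.0, 2: 25.0, 3: 20.0}
regime = {1: "DN", 2: "BP", 3: "BP"}
nbrs = {1: [2, 3], 2: [1], 3: [1]}
print(model1_satisfied(areas, regime, nbrs))  # polygon 1: 45 >= 0.75*40
```

Habplan does not guarantee this condition for every polygon; the Goal on the form controls what proportion of polygons must comply.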
You might want more than 1 spatial model component to give different levels of emphasis to different spatial goals. For example, you might feel strongly about keeping regimes 1 and 2 separated, since 1 is clearcutting and 2 represents an endangered species habitat. At the same time, you might want regimes 3, 5, and 7 to be close together although this isn't as important as keeping 1 and 2 apart.
A typical model with 1 component each for flow, ccFlow, and blocksize and a spatial model component looks like this:
OBJ = F1 + C1(1) + BK1(1) + SMod1
Use the config option if you want more or fewer of these components.
As mentioned 2 sections above, the Model1 option requires 4 pieces of input instead of 3. The third item is the proportion of area in the second regime that should surround the first regime. Read the verbiage before the Spatial Model Form picture up above for more on this.
Remember that CCFlow and Block components have a parent Flow component. They cannot be included in the objective function unless their parent is in too. Their datasets must include the same polygons that the parent Flow component has in its data. This also means that any neighbors in the Block data must show up in the parent Flow data. Usually, you will use the Block data for the CCFlow component as well. There is some error checking when the data are read that should warn you about these problems. Do not ignore these warnings, or Habplan will likely crash.
Usually, each polygon has only a subset of regimes that are valid. Valid regimes are indicated to Habplan by the way the flow and bio-2 datasets are constructed. Specifically, only include lines in these datasets for valid regimes. Habplan checks all flow components and bio-2 components to determine the valid regimes. If a regime is included for a polygon in every Flow and Bio-2 dataset, then Habplan assumes it's a valid regime for that polygon.
Note that older versions of Habplan allowed you to denote invalid regimes in Bio-2 with a rank of 0. This is inconsistent with the way the Flow components work, so it has been discontinued. Dropping this convention also allows you to use 0 when you're ranking on PNV, which might be appropriate. In fact, negative ranks are also allowed.
If there are 10 regimes for polygon xyz in the flow data and 8 regimes in the bio-2 data, then there are at most 8 valid regimes. If there is no overlap between the regimes included in the bio-2 and the flow datasets, then you'll get an error message like this "no valid regimes for polygon xyz". This also implies that the valid regimes for a polygon can change when a new component is added to the objective function. In fact, the new component could make a polygon's current regime invalid. When this happens Habplan will switch to a new valid regime ASAP, but not instantly.
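The valid-regime rule amounts to an intersection across datasets, which can be sketched like this (illustrative Python; the data layout is an assumption, not Habplan's internal representation):

```python
# Sketch: a regime is valid for a polygon only if it appears in every
# Flow and Bio-2 dataset currently in the objective function.

def valid_regimes(polygon, datasets):
    """datasets: list of dicts mapping polygon id -> set of listed regimes."""
    sets = [d.get(polygon, set()) for d in datasets]
    valid = set.intersection(*sets) if sets else set()
    if not valid:
        raise ValueError(f"no valid regimes for polygon {polygon}")
    return valid

flow = {"xyz": {1, 2, 3, 4}}
bio2 = {"xyz": {2, 3, 9}}
print(valid_regimes("xyz", [flow, bio2]))  # → {2, 3}
```

A polygon listed in flow but absent from bio-2 (or vice versa) triggers the "no valid regimes" error described above.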
Do-nothing options are specified in the Flow components as contributing to year 0, or alternatively as having 0 flow, or both. Note that a do-nothing in one flow component might result in output for another flow component. For example, doing nothing relative to clear-cutting could result in a flow of habitat. The do-nothing option must also appear in Bio-2 components in order for it to be valid for a polygon. This implies that it must be ranked relative to the other regimes. Hopefully, you can think of a meaningful way to accomplish this ranking. For example, you might decide that the regime that yields the lowest present net value is equal to doing nothing, and rank them accordingly. A do-nothing indicated by year 0 (in Flow) signals the blocksize components that this option doesn't contribute to blocks. If you want it to contribute to blocks, then you'd have to give the year when it contributes even though the output may be zero. See the material on the Block Form for more on indicating do-Nothing regimes.
The remainder of this section describes the contents of a Habplan
project file which is in xml format. The default location is in the
Habplan project directory. Most of the information in the xml file is
optional. Habplan uses default settings for anything that it doesn't
find in the project file. The contents are surrounded by the file's outermost tags.
The tag value is the number of iterations you want before Habplan stops.
The tag value is the name of the file containing management unit information.
The tag value is true if you want the units checkbox to be checked automatically when this project is loaded.
The tag value is the path to the Habplan home directory on the computer that made the xml file. This is useful for report generators when the paths to objective function component data sets are abbreviated. Remember that paths can be given relative to the Habplan example dir. This makes it easier to port Habplan project files across computers.
The section surrounded by
tags contains the
information that belongs on the objective function edit forms.
Within this section are tags that surround the information for
each type of objective function component. For example, the tag
indicates that this is where the settings for the ``F1 Component'' belong. It's important that this title exactly corresponds to the appropriate component. Even an extra space will cause this section to be bypassed.
The Flow components and the Bio2 component tend to be the most
important objective function components. This is the Bio2 tag
for the first Bio2 component:
The Bio2 component will contain the sum and the maximum possible value that show at the bottom of the Bio2 edit form when Habplan is running. This is written into the project file when you save it from Habplan. It's useful information for a report generator, but it isn't used when Habplan reads the file.
The section surrounded by tags contains the information to control the settings on the BestSchedule control form. This determines the configuration of the fitness function that defines the best schedule. The ``weight'' values are usually a string of 1s separated by commas. These are the relative weights of each component. There is one entry for each component that shows on the Habplan main form.
The ``state'' values usually consist of a string of 0s and a single 1 separated by commas. The 1 indicates which component will be checked on the BestSchedule form so that it has its attained goal added to the fitness function. Usually, you want the Bio2 component to be checked. See the section on the ``Fitness Function'' to learn more.
The section surrounded by tags contains the filenames that go in the output dialog form that appears when you select ``output'' under the Habplan ``file'' menu. This output dialog provides another mechanism for saving Habplan output. Note that filenames can be specified relative to the Habplan example directory. This makes project files more portable across different computers.
The file path names also contain a ``check'' element. If check=``true'', then the associated file was made when the last save was done with ``import=yes''. If you check the import box on the Habplan Output Control form, certain output data sets will be automatically created when you save the current schedule for later import. This is very useful for generating reports. Specifically, all flow and block data sets that can be created from the output form will be made using the current schedule. A flow dataset can be created if the flow data for a particular component has been read. A block dataset can only be made if the block component is currently in the objective function. Look at the manual section on ``Habplan Output to Files'' for more information.
The section surrounded by tags contains the information needed to drive the GisViewer, which dynamically displays a shapefile as Habplan runs. See the ``Habplan GIS Viewer'' section for information on this viewer.
The tag value is the path to the shapefile that corresponds to the polygons in this project. The value is the path to an ascii or dbf file that corresponds to the shapefile. The lines in this file must have a one-to-one correspondence with the shapefile.
The remaining tags in this section are for internal use by Habplan. They
give red, green, blue color values that must be between 0 and 255. The
tags that determine the regime display colors are the most important:
If you want to change these manually, you can consult an RGB color table, which can be found on the Internet.
The section surrounded by tags contains information about the Habplan schedule that was current at the time this project file was saved. This is used mainly if you want to import this schedule next time you load this project file. The config tag is somewhat important, because it is used to give the user a warning if they are loading a project that requires a different objective function configuration. Habplan always starts up with the previous configuration. It needs to be restarted if you want it to reconfigure.
The config value is a string of numbers that encodes the objective function configuration. First is the number of Flows (NFLOWS); then there are NFLOWS pairs of numbers that give the number of CCFlow and Block components associated with each Flow component. The last 3 numbers in the string give the number of Bio1, Bio2, and Spatial Model components.
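A small decoder sketch may make the layout of the config string clearer (the whitespace delimiter and the example string are assumptions; check your own project file):

```python
# Sketch: decode the config string described above.
# Layout: NFLOWS, then NFLOWS (ccflow, block) pairs, then Bio1/Bio2/SMod counts.

def decode_config(config):
    n = [int(x) for x in config.split()]
    nflows, rest = n[0], n[1:]
    pairs = [(rest[2 * i], rest[2 * i + 1]) for i in range(nflows)]
    bio1, bio2, smod = rest[2 * nflows:]
    return {"flows": nflows, "ccflow_block": pairs,
            "bio1": bio1, "bio2": bio2, "smod": smod}

# 1 Flow with 1 CCFlow and 1 Block, plus 2 Bio2 components:
print(decode_config("1 1 1 0 2 0"))
```

This example string corresponds to the OBJ = F1 + C1(1) + BK1(1) + BioII1 + BioII2 configuration shown earlier.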
The number of polygons for this problem is given by this tag:
The components that were checked (in the objective function) at the time
of the last save are given by this tag:
This is simply a string of 0s and 1s, where 1 means the corresponding component was checked. There is one item in the string for each component, and they are in the same order as displayed on the Habplan main window.
The tag value is ``true'' if you asked for the last schedule to be saved for importing on the next run. Note that true/false tags only do something when they are set to ``true''.
Finally, there is one tag for each polygon that gives the polygon id
and the regime that was assigned at the save time:
If there are 4000 polygons in this project, then there will be 4000 of these tags.
For the brave among you, this should give you enough information to manually edit a project xml file. Remember that Habplan will use whatever is in the file, but usually won't complain about missing items. It will complain if the file doesn't follow basic XML formatting rules. Save a backup file before manually editing this file in case you make a mistake, otherwise you could lose the settings for your project.
When a schedule is saved for later import, some output files will be automatically created if you check the import box on the Habplan Output Control form. This is useful for report generation. Habplan will save all flow and block files that can be saved whenever a schedule is saved for import. The schedule and graph files are made only if they are checked.
Habplan will automatically erase pre-existing versions of these files before automatically creating a new file. If a flow component dataset has been read, then the associated save file will be created even if the flow component is not currently in the objective function or checked on the output form. A block savefile will be created if the block component is currently in the objective function (after deleting the pre-existing file). Any files that were created will be checked on the output form after a schedule is saved for later import.
The path to the file is in the project XML file and the ``check'' element is set to ``true'' to indicate that it was saved when the last schedule was saved for import. This is useful if you want to generate a special report. The xml project file has the schedule, and the flow information is contained in the output files.
The graphs also support double buffering, which means that the graph is first drawn to an off-screen buffer and then quickly transferred to the screen. This is the default mode and seems to work best on Windows machines. However, double buffering on Solaris (Unix) makes program response sluggish, so try shutting it off for Unix. You'll notice that you can also turn on yAxis grid lines and add titles to the graphs. The graphs don't compete with commercial graphics packages, but they are adequate and free of license restrictions.
The underlying spatial data come from a standard ArcView TM shapefile containing polygon data. The path is specified by pushing the file button to get a file browser window, or by simply entering the path by hand. The order of the polygons in the shapefile must correspond to the order of the polygons in the other Habplan input files.
The GISViewer will auto-update after a specified number of Habplan iterations, or by manually pressing the "Draw" button. This provides a dynamic GIS display for the Habplan planning process. Now for a brief description of the other controls in the Habplan GIS Viewer window.
The first column shows the color, and the second column shows the regime. To change colors, you select a row in the table and then press the "Color" button to open a color chooser. Otherwise, this table has much of the same functionality as Habgen tables. Press the "Init" button to restore everything to green. Use "Undo" under the "Edit" menu to cancel the last "Init".
Likewise you can color according to individual polygons by selecting the "Polygon" button. Then a "Polygon Color Table" (Figure 8) can be opened by pushing the "Table" button:
It might sometimes be useful to read a file into the Polygon Color Table that contains information about individual polygons (use READDATA under the "File" menu). You can read either text files or dbf files. Then you can use the SelectionTool from the "Tools" menu to select rows in the table according to values of these variables. For example, you could select all rows with high BA and change their color to yellow. Then redraw the display to see where all high BA stands are located. If you click on a polygon, the corresponding row in the Polygon Color Table will be highlighted. This lets you quickly see the GIS data associated with the selected polygon.
In fact, the Polygon Color Table always controls the colors of the display. When you select "regime" mode, the colors in the Polygon Color Table are first changed according to the regimes currently assigned, then the display is updated from the Polygon Color Table. Therefore, the colors in the polygon color table will be changing automatically when you are in regime mode.
WARNING: You may not be able to use the auto-update feature with a slow computer. If it takes your computer too long to redraw the GIS display, you will find that all other Habplan graphs will freeze. Try moving the slider further to the right to impose a bigger time delay if this happens. For computers slower than 300 MHz, you may have to manually redraw the GIS Display by pressing the draw button and leaving the auto-redraw iterations set to 0. On fast computers, the slider will slow things down so you have time to look at the GIS viewer before the next change.
HINT: The default action is for the GISViewer to draw to an offscreen buffer and then quickly display the buffer. This can give the appearance that nothing is happening as you wait for large files to be displayed. You can have things drawn directly to the screen by going to the "Option" menu on any Habplan graph to turn off DoubleBuffering. This may lead to unpleasant flickering, however.
You can simultaneously pull the regimes from multiple polygons into the regime editor by pushing the ``Polys'' button. This will pull the data from any polygon that is selected in the Polygon Color Table (PC-Table). The PC-Table has its own selection tool that lets you select multiple polygons according to certain criteria. Suppose you wanted to only allow a Do-Nothing regime for Pine stands that are older than 100 years. Simply select them in the PC-Table, press the ``Polys'' button, and then select all the DN regimes in the regime editor table. Then press the DelNot button to delete all not-selected regimes. Now you can save these results or apply them immediately, or both.
Before using this tool, recall that FLOW and BIO2 components can have different regimes listed for a polygon. Habplan will consider assigning the regimes that appear in all FLOW and BIO2 components (entered in the objective function) for a polygon. Therefore, if you want to reduce the number of valid regimes for a polygon, you only need to edit one of the FLOW or BIO2 component data sets. Select the component you want to edit at the top of the Regime Edit Table. Note that this is also a convenient way to look at the data for each polygon even if you don't want to edit the data. You are not allowed to create regimes that don't appear anywhere in the original data, but you can use regimes that did not appear in the original data for the polygon being edited.
The buttons in the lower left (Figure 9) allow you to delete selected rows or to delete NOT-selected rows. The black buttons (lower middle) operate on saved regime edits. Note that the ``Save'' button is not enabled until you do some editing. If you press the ``Save'' button, what is currently in the edits table will be appended to a file with a name derived from the file that contains the data for the objective function component being edited. Suppose the data file for the F1 component is ``F1.data''; then the regime edits will go into ``F1.data.edits'' in the same directory. Likewise, the center ``Delete'' button will delete this file. The ``Apply'' button will apply the edit file. The ``ApplyThis'' button will apply the edits in the table, not the edits in the file. You can also read and write data from or to files by making the appropriate selection under the ``File'' menu at the top left of the table.
Finally, every time you right click on a polygon, its data for the selected component will overwrite whatever else is in the table, unless you check the ``Append'' box at the top of the table. In append mode, right clicking on another polygon causes its data to be appended to the top of the table. This allows you to do simultaneous edits on blocks of polygons.
The files you create with the Regime Editor will be automatically read by Habplan next time you reload this problem. You will be notified of this in the Habplan Console window as the .edits file is read. A .edits file has the same format as the data file for the corresponding objective function component. Therefore, you could create a .edits file without using the Regime Editor. In any event, don't leave .edits files hanging around in your data directory unless you want Habplan to actually use them.
The algorithm proceeds as follows:
1. Get a sorted list, STDS, of all polygons managed in year t that could be block anchors. Only the first in the list is guaranteed to be an anchor under the naming convention described above.
2. STDS(1) becomes the first member of the vector blockinc for block(STDS(1),t).
3. Find all neighbors of blockinc(1) that are managed under a block option; then find all neighbors of these neighbors recursively.
4. Compute the size of the block.
5. Remove all polygons from the STDS list that were already included in the current block, since they can no longer be anchors.
6. Go back to step 2 and get the next block, unless STDS is empty, which means that all blocks were found.
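The block-assembly procedure above can be sketched as follows (illustrative Python with hypothetical data structures, not Habplan's implementation):

```python
# Sketch: grow each block from an anchor by recursively adding neighbors
# that are managed under a block option, then remove block members from
# the anchor list and repeat until the list is empty.

def find_blocks(stds, neighbors, blocked):
    """stds: sorted polygon ids managed in year t; blocked: ids managed
    under a block option; neighbors: id -> adjacent ids."""
    remaining = list(stds)
    blocks = []
    while remaining:
        anchor = remaining[0]              # first in the list is an anchor
        block, frontier = {anchor}, [anchor]
        while frontier:                    # recursive neighbor expansion
            current = frontier.pop()
            for n in neighbors.get(current, []):
                if n in blocked and n not in block:
                    block.add(n)
                    frontier.append(n)
        blocks.append(sorted(block))       # block size = len(block) members
        remaining = [p for p in remaining if p not in block]
    return blocks

nbrs = {1: [2], 2: [1, 3], 3: [2], 4: []}
print(find_blocks([1, 2, 3, 4], nbrs, blocked={1, 2, 3, 4}))
# → [[1, 2, 3], [4]]
```

Polygons 1-3 form one contiguous block, so 2 and 3 are removed from the anchor list; isolated polygon 4 becomes its own block.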
Therefore, the highest fitness value goes to the schedule where all objective function components have converged and the achieved goal(s) on selected components are the highest. Note that Bio2 can be used to rank on Present Net Value, so the fitness function can be used to save schedules where the components have converged and the most PNV is attained. This capability is accessed under the "tool" menu by selecting "BestSchedule". A window such as that in Figure 10 will open.
The window is set to give each converged objective function component a score of 1. Additionally, the Bio2 component is checked, so its achieved goal will be added to the fitness as long as Bio2 is converged. The scores can be changed and any components "checked", which leads to a very flexible method to define fitness.
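The fitness calculation implied above can be sketched like this (an assumption about how the pieces combine, not Habplan source code):

```python
# Sketch: each converged component contributes its score, and a checked
# component additionally contributes its achieved goal.

def fitness(components, weights, checked):
    """components: list of (converged: bool, achieved_goal: float);
    weights and checked parallel the component list."""
    total = 0.0
    for (converged, goal), w, c in zip(components, weights, checked):
        if converged:
            total += w                 # score for a converged component
            if c:
                total += goal          # attained goal of a checked component
    return total

# e.g. F1 and Bio2 converged, BK1 not; Bio2 is the checked component:
comps = [(True, 0.9), (True, 0.8), (False, 0.5)]
print(fitness(comps, weights=[1, 1, 1], checked=[0, 1, 0]))  # → 2.8
```

Raising a component's score or checking additional components changes which schedules rank as "best", which is why any such change resets the saved schedules.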
The "Save N Best" choice tells Habplan how many of the top best schedules to save. You can load one of the saved best schedules by selecting the number of the schedule and hitting the "Load" button. Hit "Restore" to load the schedule that was in effect prior to the last load. The "Reset" button will erase the currently saved best schedules so you can start fresh. Reset is done automatically any time a change is made to your fitness function definition, or an objective function component is added or removed.
The latest addition to the BestSchedule window is the "Dynamic" check box, which implements the dynamic weighting algorithm described in Liu et al. (2001, JASA 96(454): 561-573). If you read the JASA article, this is the QType method with theta=1.0 and a=1.1. This is an enhancement to the Metropolis algorithm used by Habplan. The dynamic weight allows the Metropolis algorithm to become more flexible in its search for new solutions, at least until the dynamic weight converges. Try checking this box when Habplan seems to have done as well as it can relative to finding good solutions to your problem. The dynamic weight will mix things up a little and might allow Habplan to reconverge to a slightly better solution space.
On the main menu, components that have achieved their goals have red text, over-achievers turn yellow, and under-achievers stay at the default text color. This makes it easy to monitor progress as Habplan is running. When all components are either red or yellow, this indicates convergence on your objective function goals. At this point, you might want to increase the goal on one or more of the components to see if it can be attained without losing convergence on some other component. If one of the components relates to economic attainment, say a biological type II based on ranking by PNV, then you would likely want to increase this goal. In this way, you can get Habplan to put more weight on PNV and also discover which components are limiting further attainment of PNV. Habplan does not automatically maximize attainment of any particular value, but it can be made to do so by adjusting the goals upward after initial convergence. Likewise, if a goal is set too high initially, this may prevent overall convergence. There is some art to attaining both convergence and a near optimal solution, since only you can define optimal for your problem.
The BlockSize component may indicate convergence even when your min and max block sizes are violated. In this case, convergence indicates that Habplan can't do any better than what it currently has.
The actual weight is shown on the component's edit form. These should initially be set to 1.0, unless you have some reason for doing otherwise. When you save the settings by selecting File - Save, you will be asked if you want to reset all weights to 1.0. The recommendation is to say "Yes". You can also reset the weights to 1.0 manually or by selecting Misc - Weights=1.
Upon executing Habplan and loading the desired data file (as described previously), click on ``Tool'' and then ``LpMPS'' to open the LP MPS Generator form (Figure 11). A default file pathname is given for the MPS output file that will be created. This pathname can be changed according to where you want the file to be saved. Now check the objective function components that you want to contribute to the MPS output. Notice that checking a component causes the associated ``Config'' button to be activated. Also notice that the BK and SMod components are not offered as options for entry as objective function components. This is because LP cannot take spatial distribution into consideration in our formulation.
In order to constrain a given component, click on the associated ``Config'' button. A form opens (Figure 12) that gives you the option to constrain the component consistently throughout the entire planning period, or only for select years. The default setting on this form is ``unSelect'', which simply leaves the component unconstrained. If you choose to constrain the component for the entire planning period, click ``Select'' as shown in Figure 12. Now specify lower and upper flow limits as proportions of the previous year's flow. If you choose to constrain flows only for select years, click ``Select'' and enter the year, and lower and upper flow bounds as shown in Figure 13. These bounds are entered as actual values, not proportions of the previous year's flow. It is important to remember that, when constraining a component for select years, the first and last triplet entries must be those for the first and last years of the planning period, respectively. In between these first and last years, however, you can enter flow bounds for as many interim years as you please. Whether constraining components for the entire planning period or for select years, the boxes in which the constraint proportions or values are entered will remain red until a valid entry has been made, after which they will change to white. If at any time, during the setup or the running of the LP problem, you decide to close a Config form, the latest constraint entries will be maintained.
Once all the objective function components have been constrained, or left unconstrained, according to your satisfaction, the next step is to create the MPS file. This is done by clicking on the ``MPS'' button on the LP MPS Generator form (Figure 11). Unless otherwise specified, this MPS file will be saved under the default file-pathname. On pushing the ``MPS'' button, a progress meter will pop up. This progress meter monitors the status of the MPS file. While the file is being created, the progress meter will remain active, and display ``Working'' in the green box, whereas once the file creation is complete, the progress meter will no longer be active, and it will display ``Finished'' in the green box. Once the MPS file has been created, this progress meter can be closed. Now click on the ``LpOpen'' button on the LP MPS Generator form to open the ``Run LP Solver'' window (Figure 14). Next, in order to run lp_solve, click on the ``LP'' button on the ``Run LP Solver'' window. Once again, a progress meter will pop up, monitoring the status of the LP-run. This progress meter can be closed during the LP run without killing the process, but it is advisable to only close it once it indicates that the LP-run is ``Finished''.
Now, in the ``Run LP Solver'' Window, the LP solution is displayed (Figure 14). This solution is saved in an output file, designated by the default ``Output File Name'' on the ``Run LP Solver'' window. If the ``View Output'' block is checked, the objective function value is displayed, along with all regime assignments, and the total number of assignments. If ``View Output'' is not checked, only the objective function value and the number of regime assignments are shown. In order to view the various component graphs associated with this solution, open the graphs from the main Habplan window and click ``Get'' on the ``Run LP Solver'' window. This ``Get'' function pulls the LP results into Habplan. For stands that have more than 1 regime in the LP solution, Habplan will create an ``integer'' solution by assigning the stand to the regime with the largest proportion. If, at any time during the LP run, you click the ``Close'' button on the ``Run LP Solver'' window, the ``Run LP Solver'' window will close, and the LP run will be terminated. The same is true for the ``Close'' button on the ``LP MPS Generator'' window.
In order to save the various components' output for an LP solution, first click on ``File'' (on the main Habplan window), and then ``Output''. This will open a ``Habplan Output'' dialog box. For further instructions, refer to Section 13: Habplan Output to Files.
The linear programming algorithm is well known and is based on
maximizing or minimizing an objective function subject to
constraints. It is generally specified as:

Maximize (or minimize) c'x subject to Ax <= r and x >= 0,

where x is the vector of decision variables, c is the vector of objective function coefficients, A is the matrix of constraint coefficients, and r is the vector of right-hand-side values.
The components of Figure 15 and their relation to the LP setup will be explained briefly, row by row. In the first row (column labels), the decision variables, x_ij, represent the proportion of polygon i that is assigned to regime j (corresponding to x in the general LP format). The variables defined as FnYt are (F)low accounting variables that represent the volume flow of flow component n in (Y)ear t. The Row Type determines the sign of the mathematical expression (<=, =, or >=).
The RHS (Right Hand Side) refers to the value of the right hand side of the mathematical expression (corresponds to r in the general LP format).
The objective function coefficients (second row), c(i,j) (Figure 15), are taken directly from the Bio2 file (corresponding to c in the general LP format). The following is an example of data from the Bio2 file upon which the example tableau (Figure 15) is based:
1 1 10
1 2 20
2 1 30
2 2 40
The numbers in the first column represent the polygon ID, the numbers in the second column represent the regime assigned to the polygon, and the numbers in the third column represent the contribution of each decision variable (c(i,j)) to the objective function value. So, the objective function for this example would read as follows:

Maximize 10*x11 + 20*x12 + 30*x21 + 40*x22
The third and fourth rows in Figure 15 constitute the acreage constraints of the form:

x_i1 + x_i2 + ... <= 1 (for each polygon i)
This set of constraints is imposed to keep LP from assigning more than 100% of the polygon to its set of valid regimes. These constraints are named Pi, where i is the polygon number.
The fifth and sixth rows in Figure 15 constitute the flow accounting constraints, which are included to allow LP to keep track of annual outputs of items such as clearcut acres and wood flow. Thus, these are not constraints in the true sense of the word, but rather perform a ``summing'' function. These constraints are of the form:

v(1,1,t)*x11 + v(1,2,t)*x12 + ... - FnYt = 0 (for each flow component n and year t),

where v(i,j,t) is the year-t output of polygon i under regime j. The following is an example of data from the flow file upon which the example tableau (Figure 15) is based:
1 1 1 100
1 2 2 200
2 1 1 150
2 2 2 200
The numbers in the first column represent the polygon ID, the numbers in the second column represent the regime assigned to the polygon, the numbers in the third column represent the year in which output occurred, and the numbers in the fourth column represent the contribution of each decision variable to the flow volume of a given year (the coefficients denoted here v(i,j,t)).
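The summing that these accounting rows perform can be pictured with a small sketch (illustrative names, not Habplan code) that accumulates each year's flow from the example data above:

```python
# Per-year output coefficients from the example flow data:
# (polygon i, regime j, year t) -> output if regime j is applied to polygon i
v = {(1, 1, 1): 100, (1, 2, 2): 200, (2, 1, 1): 150, (2, 2, 2): 200}

def flows(x):
    """Accumulate F(t) = sum of v(i,j,t) * x(i,j) over all assignments."""
    f = {}
    for (i, j, t), vol in v.items():
        f[t] = f.get(t, 0.0) + vol * x.get((i, j), 0.0)
    return f

# Assign polygon 1 to regime 1 and polygon 2 to regime 2:
print(flows({(1, 1): 1.0, (2, 2): 1.0}))  # {1: 100.0, 2: 200.0}
```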
The seventh and eighth rows in Figure 15 constitute the flow constraints, which are different from the flow accounting constraints. These constraints are imposed to control the trend and variability of periodic flow, and have the following form:

    L x F(n,t) <= F(n,t+1) <= U x F(n,t)

where U and L are user-defined upper and lower proportions that control the change in flow from one time period to the next.
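These bounds can be checked with a short sketch (the 5% window below is an arbitrary illustration, not a Habplan default):

```python
def flow_trend_ok(f, lower=0.95, upper=1.05):
    """Check that each successive flow stays within user-defined
    proportions of the previous one: L*F(t) <= F(t+1) <= U*F(t)."""
    years = sorted(f)
    return all(lower * f[t1] <= f[t2] <= upper * f[t1]
               for t1, t2 in zip(years, years[1:]))

print(flow_trend_ok({1: 100.0, 2: 103.0}))  # True  (within +/- 5%)
print(flow_trend_ok({1: 100.0, 2: 200.0}))  # False (flow doubles)
```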
An LP matrix, similar in format to that shown in Figure 15, is internally generated in Habplan. This matrix is then written out in the standard MPS format for LP input. Based on this MPS input file, the LP is subsequently solved using lp_solve, a free linear programming solver distributed with full source code, examples, and manuals.
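For orientation, a hand-written MPS file covering just the objective and acreage constraints of the two-polygon example might look like the following (the row and column names are illustrative; the names Habplan actually generates may differ):

```
NAME          EXAMPLE
ROWS
 N  OBJ
 L  P1
 L  P2
COLUMNS
    X11       OBJ       10.0   P1        1.0
    X12       OBJ       20.0   P1        1.0
    X21       OBJ       30.0   P2        1.0
    X22       OBJ       40.0   P2        1.0
RHS
    RHS       P1        1.0    P2        1.0
ENDATA
```

The row types match the tableau: N marks the objective (free) row, and L marks a less-than-or-equal constraint.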
This simple example should give the user a better understanding of the basic LP formulation. Of course, when thousands of polygon-regime combinations are introduced, coupled with multiple flow components, an LP problem can soon become rather large and complex.
Do the following to create a Habplan server:
At this point you have a Habplan server that may be accessible from a client machine. If the server runs Windows 95 or 98 you are set. However, Unix and NT machines will not allow just any machine to have access, for security reasons. Suppose the client machine's internet name is "client.treecompany.com". To allow access to a Unix server, you enter the following command from a shell on the Unix server:
To allow access to an NT server, client.treecompany.com and the server must be in the same workgroup.
Repeat these steps for each server. Note that you can run any combination of Unix and Windows machines. Finally, remember that any machine on the network can be a server, as long as it has Java and Habplan installed (and a running rmiregistry). Once you have installed Habplan on the first server and understand the steps involved, additional servers can be set up easily.
Make sure there is no CLASSPATH environment variable before starting the rmiregistry. Check this by entering the "set" command in a command window. If CLASSPATH appears, you can temporarily get rid of it for the current window by entering "set CLASSPATH=" without the quotes. Initial testing indicates that this approach to parallel processing works well, but anything taking place over the network is prone to more errors than working on a single machine.
The job you submit is exactly the job that you currently have set up on the client machine. Usually, you would first work on the client to develop an objective function and a starting point schedule. Then set the number of iterations and submit this as a job to a server. The server will begin with your starting schedule and run the specified number of iterations in an attempt to improve the schedule. The status column will update you on the number of completed iterations every 4 seconds, unless you change the update interval by selecting the timer button. At any time you can "Kill" a job or "Retrieve" the current results from the selected server. After retrieving the results, either keep them or "Restore" what you had prior to retrieving. Retrieve gives you the best schedule found so far by the server, where best is defined by the fitness function.
You can control up to 20 servers by pushing "addRow" to add space for more servers. This means that you can have 20 machines working on the same Habplan run simultaneously. Your job is then to retrieve the results and keep the best. You can iterate by submitting the current best to the servers for more processing until satisfactory results are obtained.
Select the server checkbox to turn this instance of Habplan into a server. Press the "Servers" button to get a list of servers on the current machine. Usually, there is only 1. If Habplan detects a previously existing server when you select the server checkbox, an option window will open to allow you to number this server from 1 to 6. Picking the number of an already existing server will over-write it. The most likely errors you will encounter at this point are due to not having started rmiregistry, or to a CLASSPATH variable causing the registry to look in the wrong place for the server object.
To port to another computer, follow these steps: (If you don't have the jar command, replace the jar steps with zip; jar and zip use the same archive format.)
This seamless and simple porting process requires that you install all data files in a subdirectory of Habplan3/example. Then you indicate the path to your data on the Habplan edit forms as follows: "myDataDir/flow.dat". Habplan defaults to the "example" directory when a partial path is specified. This allows you to port across machines with different file structures, without changing paths. If you use full paths to your data files, the paths will have to be edited after porting. The same goes for saving settings under the File menu. Save settings into the "hbp" directory, which comes up as the default, to enable seamless porting.
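The path-resolution rule above can be pictured with a small sketch (illustrative only; "resolve" is not an actual Habplan function):

```python
import os

def resolve(path, habplan_root="Habplan3"):
    """Mimic Habplan's lookup rule: a partial path is taken relative
    to the example directory; a full (absolute) path is used as-is."""
    if os.path.isabs(path):
        return path
    return os.path.join(habplan_root, "example", path)

print(resolve("myDataDir/flow.dat"))  # resolves under Habplan3/example
```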
There is a file called properties.hbp in the hbp subdirectory of the directory where you put the Habplan class files. You can edit this file to control the colors of the objective function forms, and how many old jobs should be remembered and displayed at the bottom of the File menu. This feature is similar to the way word processors let you quickly jump to recent files that you've edited.
Colors for the component forms are specified in hexadecimal. HTML pages control colors with the same hexadecimal codes (for example, #FF0000 is pure red), so look in an HTML manual if you want to understand hexadecimal color codes.
The following properties can be adjusted in the properties.hbp file:
There is an "example" directory that comes with Habplan that contains the example data in subdirectories. When an incomplete path is given, Habplan looks for data here. You may find it useful to put your data files in the example directory too, if you want to transport Habplan to different machines easily. See porting to other computers for more information on this. Also, Habplan3/hbp/oldjobs.hbp contains the names of the files that store previously used settings, which appear at the bottom of the File menu. Edit this file if you want to alter that list. These files contain the form settings. When no path is given, Habplan assumes they reside in the hbp directory. The convention is to end files pertaining to Habplan settings with the .hbp extension.
In the future, this same approach may be used to allow users to write custom input routines so they don't have to use the default input formats for data. This would be a bit more complicated, however.