MIMIC runs on Solaris version 8 and newer on the Intel platform. Even though MIMIC may run fine on older Solaris releases for light use, for heavy use it is highly recommended to run it on Solaris 9 with the latest patches. The following sections detail some of the most common problems encountered on Solaris, and their fixes.
On older versions of Solaris, the tar utility provided by the OS has limitations that prevent it from extracting our distribution image. For that reason, on Solaris 9 and older you should use the GNU tar utility we provide on the download page. Solaris 10 and newer do not appear to have this problem.
MIMIC is an especially memory-intensive application, and it needs plenty of RAM and swap space for the more complex device simulations. We recommend an absolute minimum of 64 MB of RAM and 128 MB of swap space to start (a 1-agent simulation). The common 25-agent simulation should have at least 256 MB of RAM and 512 MB of swap space. Verify that you have enough swap space with the following command from any shell:

% swap -s

It should display something like
total: 15188k bytes allocated + 4540k reserved = 19728k used, 99100k available
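Assuming the swap -s output format shown above, the "available" figure can also be extracted programmatically. This is a sketch that uses the sample line as stand-in input; on a live Solaris system you would pipe the output of swap -s into the same awk expression instead:

```shell
# Extract the "available" swap figure (in KB) from swap -s output.
# The sample line below stands in for live `swap -s` output.
line='total: 15188k bytes allocated + 4540k reserved = 19728k used, 99100k available'

# The next-to-last field is the available swap, e.g. "99100k";
# strip the trailing "k" to get a plain number.
avail_kb=$(echo "$line" | awk '{sub(/k$/, "", $(NF-1)); print $(NF-1)}')
echo "available swap: ${avail_kb} KB"
```
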
As a rule of thumb, you should have twice as much swap space as physical memory. To determine the amount of physical memory on your machine, do something like:
% dmesg | grep mem
mem = 131072K (0x8000000)
avail mem = 126312448
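The rule of thumb above can be turned into a quick calculation. This sketch uses the 131072 KB figure from the dmesg output; substitute the value reported on your own machine:

```shell
# Rule of thumb: swap space should be twice physical memory.
# ram_kb is taken from the dmesg output above; substitute your own value.
ram_kb=131072                        # 128 MB of physical RAM
swap_kb=$((ram_kb * 2))              # recommended swap: 2x RAM
echo "RAM $((ram_kb / 1024)) MB -> recommended swap $((swap_kb / 1024)) MB"
```
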
If you need more swap space, you can easily create it with the mkfile(1M) and swap(1M) commands. A 128MB swap file on the local filesystem can be created with the following commands:
mkfile 128M [PATH-OF-LOCAL-SWAP-FILE]
swap -a [PATH-OF-LOCAL-SWAP-FILE]
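Note that a swap file added with swap -a is not re-added automatically at reboot. One way to make it permanent is an entry in /etc/vfstab; the following is a sketch with a placeholder path (see the vfstab(4) man page for the exact field layout on your release):

```
#device to mount    device to fsck   mount point   FS type   fsck pass   mount at boot   options
/export/swapfile1   -                -             swap      -           no              -
```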
Also, on Solaris the /tmp directory can be mounted on the swap space, so the two encroach on each other. E.g.
% /usr/ucb/df /tmp
Filesystem            kbytes    used   avail capacity  Mounted on
swap                  223568   25200  198368    12%    /tmp
You can find out more in the swap(1M) and tmpfs(7FS) man pages.
On Solaris, the absolute limit on the amount of virtual memory accessible to the MIMIC Simulator is currently below 4 GB, due to 32-bit addressing limitations. We have run MIMIC on Solaris 2.6 with up to approximately 3.5 GB of virtual memory.
The optional 64-bit executable on Solaris removes this limitation, but will run slower than the 32-bit executable.
If there is another SNMP agent (e.g., snmpd) running on the system, you need to kill it prior to running MIMIC; otherwise there will be conflicts when MIMIC tries to attach to the SNMP port. On newer versions of Solaris, you need to use the Solaris Service Management Facility (SMF) to disable the SNMP services, e.g.
svcadm disable snmpdx
svcadm disable sma
On older versions of Solaris, the following commands (given as root) should accomplish the same:
# ps -ef | grep snmpd
    root  4334  4332  0 10:12:10 pts/7    0:03 /etc/snmpd
    root  4665  4660  1 17:08:43 pts/0    0:00 grep snmpd
# kill 4334     ### PID of the running snmpd
You need to perform an extra kernel configuration step on Solaris prior to running MIMIC to enable more than the default of 256 addresses per interface for agent instances. On Solaris 2.6 and later you can, as root, use ndd(1M) to set the necessary parameter. For example, to allow 1024 addresses:
# /usr/sbin/ndd /dev/ip ip_addrs_per_if
256
# /usr/sbin/ndd -set /dev/ip ip_addrs_per_if 1024
# /usr/sbin/ndd /dev/ip ip_addrs_per_if
1024
To apply this parameter permanently, you need to add this command to the boot startup files (e.g. in /etc/rc*). Contact your MIS department on this issue.
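As an illustration, a minimal boot-time script could look like the following; the file name and run level are only suggestions, so check with your administrator for the convention on your systems:

```
#!/sbin/sh
# Example /etc/rc2.d/S99ipaddrs (the name is a suggestion only):
# re-apply the per-interface address limit at every boot.
/usr/sbin/ndd -set /dev/ip ip_addrs_per_if 1024
```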
NOTE: There is an absolute limit of 8192 addresses per network interface on Solaris. If you want to run agents with more than 8192 addresses, you will require multiple interface cards.
Solaris assigns the same MAC address to all Ethernet Network Interface Cards (NIC). This will cause problems if you want to connect all NICs to the same LAN, and run MIMIC over multiple NICs.
The Solaris Developer Connection knowledge base details the workaround:
"I'd like to set up my server to service multiple subnets via my switch hub setup, without using VLANs.
Avoiding VLANs allows me to patch in machines and configure them to a subnet without restrictions, which makes the best use of the hub(s)' available ports.
When I add a second card into the equation, my server hangs.
The server hang is due to the Ethernet card(s) taking their MAC address from the OBP (Open Boot PROM). The hub then has a problem routing packets, as two ports hold the same MAC address.
In order to achieve this setup we must supply unique MAC addresses to any extra cards added to the server.
Edit /etc/init.d/rootusr and locate the section of the script where the interfaces are plumbed in. Directly after this add a line
e.g. ifconfig hme1 ether 08:01:20:44:33:22
save and reboot."
IPv6 is supported only on Solaris 8 and later. This is an OS platform limitation.
Prior to running agents with IPv6 addresses, the network interface needs to have the IPv6 module "plumbed" and running. This is a one-time configuration step, e.g.
# ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 188.8.131.52 netmask ffffff00 broadcast 184.108.40.206
        ether 8:0:20:b0:27:7e
# ifconfig hme0 inet6 plumb up
# ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 220.127.116.11 netmask ffffff00 broadcast 18.104.22.168
        ether 8:0:20:b0:27:7e
hme0: flags=2000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 2
        ether 8:0:20:b0:27:7e
        inet6 fe80::a00:20ff:feb0:277e/10
This is done by default on IPv6-enabled systems.
When using the route command to configure IPv6 routes, you need to use the -inet6 command line option, e.g.
# netstat -r -n

Routing Table: IPv4
  Destination           Gateway              Flags  Ref   Use   Interface
-------------------- -------------------- ----- ----- ------ ---------
22.214.171.124       126.96.36.199        U         1   2332  hme0
188.8.131.52         184.108.40.206       U         1      0  hme0
default              220.127.116.11       UG        1    295
127.0.0.1            127.0.0.1            UH        4     99  lo0

Routing Table: IPv6
  Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------ -----
fe80::/10                   fe80::a00:20ff:fef5:57c     U       1      4 hme0
ff00::/8                    fe80::a00:20ff:fef5:57c     U       1      0 hme0
::1                         ::1                         UH      1      8 lo0

# ping -c 1 3001::1
ICMPv6 No Route to Destination from gateway 3ffe::1
 for icmp6 from 3ffe::1 to 3001::1
^C
# route add -inet6 3001::/64 fe80::a00:20ff:fef5:57c 0
add net 3001::/64
# ping -c 1 3001::1
3001::1 is alive
# netstat -r -n

Routing Table: IPv4
  Destination           Gateway              Flags  Ref   Use   Interface
-------------------- -------------------- ----- ----- ------ ---------
18.104.22.168        22.214.171.124       U         1   2338  hme0
126.96.36.199        188.8.131.52         U         1      0  hme0
default              184.108.40.206       UG        1    295
127.0.0.1            127.0.0.1            UH        4     99  lo0

Routing Table: IPv6
  Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------ -----
3001::/64                   fe80::a00:20ff:fef5:57c     U       1      2 hme0
fe80::/10                   fe80::a00:20ff:fef5:57c     U       1      5 hme0
ff00::/8                    fe80::a00:20ff:fef5:57c     U       1      0 hme0
::1                         ::1                         UH      1      8 lo0

# route delete -inet6 3001::/64 fe80::a00:20ff:fef5:57c
delete net 3001::/64
# ping -c 1 3001::1
ICMPv6 No Route to Destination from gateway 3ffe::1
 for icmp6 from 3ffe::1 to 3001::1
The MIMIC Simulator mimicd runs as a setuid-root daemon on Solaris. By default, core files are not produced if it terminates abnormally. If you see the mimicd crashing, you need to use coreadm to enable core dumps to help us diagnose the problem:
coreadm -e proc-setid
It can be disabled anytime using
coreadm -d proc-setid
For detailed help, see the coreadm(1M) man page; running coreadm without arguments displays the current configuration.
If you are setting up a partition on your hard disk to contain MIMIC data, you will want to consider tuning the block and inode allocation of the filesystem (on Solaris 8 or newer). Since MIMIC data typically consists of many small files, it uses up a Unix filesystem resource called "inodes". When these run out, you will get error messages such as "file system full", even though the output of df(1) shows plenty of space available.
To change the inode allocation from the default for the newfs utility, use the -f command line option to reduce the fragment-size to 512 bytes (to reduce internal fragmentation), and the -i command line option to set the bytes-per-inode to 512 (to allocate the maximum number of inodes).
For details see the newfs(1M) man page or consult your system administrator.
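For example, a filesystem tuned this way could be created as sketched below; the device name is only a placeholder, and the -N option previews the resulting parameters without writing to the disk:

```
# Preview the resulting filesystem parameters without creating anything:
newfs -N -f 512 -i 512 /dev/rdsk/c0t1d0s7
# Create the filesystem (destructive -- double-check the device name!):
newfs -f 512 -i 512 /dev/rdsk/c0t1d0s7
```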
MIMIC will run in the global zone regardless of what other zones are configured on the system. MIMIC will run inside a non-global zone provided it is configured properly:
We are constantly working to remove limitations, but currently we know of the following: