HP0-704 real questions | Pass4sure HP0-704 real questions |


HP0-704 TruCluster v5 Implementation and Support

Study guide prepared by HP dumps experts: HP0-704 dumps and actual questions

100% actual questions - Exam pass guarantee with high marks - Just memorize the answers

HP0-704 exam Dumps Source : TruCluster v5 Implementation and Support

Test Code : HP0-704
Test Name : TruCluster v5 Implementation and Support
Vendor Name : HP
Real Questions : 112 actual questions

Just try these actual exam questions and success is yours.
I am very pleased with the HP0-704 Q&As; they helped me a lot at the exam center. I can confidently go for other HP certifications as well.

Get these actual questions and go on vacation while you prepare.
I used to be quite lazy and didn't want to work hard, and I usually looked for shortcuts and easy strategies. While I was doing an IT course, HP0-704, it turned out to be very tough for me and I wasn't able to find any guideline. Then I heard about this website, which was very well known in the market. I bought the material and my problems disappeared within a few days of starting with it. The sample and practice questions helped me a lot in my preparation for the HP0-704 test, and I successfully secured good marks as well. That was truly thanks to killexams.

HP0-704 exam questions have changed; where can I find a new question bank?
Can you smell the sweet scent of victory? I know I can, and it is truly a very beautiful smell. You can smell it too if you go to this site to prepare for your HP0-704 exam. I did the same thing right before my test and was very satisfied with the service provided to me. The facilities here are impeccable, and once you are in, you won't be worried about failing at all. I didn't fail; I did quite well, and so can you. Try it!

Is there a new HP0-704 exam syllabus available?
Surprisingly, I answered all the questions in this exam. Much obliged; it is a fantastic asset for passing tests. I recommend everyone to simply use it. I studied numerous books but failed to get there. Anyhow, after using these questions and answers, I found it straightforward to prepare questions and answers for the HP0-704 exam. I understood all of the topics well.

HP0-704 certification exam preparation had to be this easy. The material is simple and solid, and you can pass the exam if you go through their question bank. No words to express, as I have passed the HP0-704 exam on the first attempt. Some other question banks are also available in the market, but I feel this one is the best among them. I am very confident and am going to use it for my other exams too. Thanks a lot, killexams.

Where can I get help to prepare for and pass the HP0-704 exam?
The exact answers were not difficult to remember. My approach of rehearsing the actual questions was genuinely effective, as I gave all the right replies in the HP0-704 exam. Much appreciated for the help. I conveniently completed the exam preparation within 12 days. The presentation style of this guide was simple, without any lengthy answers or convoluted clarifications. Some of the topics that are otherwise tough and difficult were also taught beautifully.

It's unbelievable, but HP0-704 actual exam questions are available right here.
I was very disappointed when I failed my HP0-704 exam. Searching the internet told me that there is a website with the resources I needed to pass the HP0-704 exam in no time. I bought the HP0-704 preparation pack containing questions, answers, and an exam simulator, prepared, sat the exam, and got 98% marks. Thanks to the team.

The HP0-704 certification exam is quite stressful without this study guide.
The quick answers made my preparation more convenient. I completed 75 questions out of 80 well within the stipulated time and scored 80%. My aspiration was to become certified by taking the HP0-704 exam. I got the actual questions guide just 2 weeks before the exam. Thanks.

Obtain these HP0-704 questions.
I passed my HP0-704 exam, and that was not an easy pass but a terrific one that I should tell everyone about with pride, as I got 89% marks in my HP0-704 exam from studying this material.

Where can I download up-to-date HP0-704 dumps?
I spent enough time studying these materials and passed the HP0-704 exam. The stuff is good, and while these are brain dumps, meaning the materials are built on the actual exam content, I don't understand people who complain about the HP0-704 questions not being accurate. In my case, not all questions were 100% the same, but the topics and general approach were certainly correct. So, friends, if you study hard enough, you will do just fine.

HP TruCluster v5 Implementation and Support

GSSAPI Authentication and Kerberos v5 | actual Questions and Pass4sure dumps

This chapter is from the book.

This section discusses the GSSAPI mechanism, in particular Kerberos v5, how it works alongside the Sun ONE Directory Server 5.2 software, and what is involved in implementing such a solution. Please be aware that this is not a trivial task.

It's worth taking a brief look at the relationship between the Generic Security Services Application Program Interface (GSSAPI) and Kerberos v5.

The GSSAPI does not actually provide security services itself. Rather, it is a framework that provides security services to callers in a generic fashion, with a range of underlying mechanisms and technologies such as Kerberos v5. The current implementation of the GSSAPI only works with the Kerberos v5 security mechanism. The best way to think about the relationship between GSSAPI and Kerberos is this: GSSAPI is a network authentication protocol abstraction that allows Kerberos credentials to be used in an authentication exchange. Kerberos v5 must be installed and running on any system on which GSSAPI-aware programs are running.

Support for the GSSAPI is made possible in the directory server through the introduction of a new SASL library, which is based on the Cyrus CMU implementation. Through this SASL framework, DIGEST-MD5 is supported as explained previously, as is GSSAPI, which implements Kerberos v5. Additional GSSAPI mechanisms do exist. For example, GSSAPI with SPNEGO support would be GSS-SPNEGO. Other GSS mechanism names are based on the GSS mechanism's OID.

The Sun ONE Directory Server 5.2 software only supports the use of GSSAPI on the Solaris OE. There are implementations of GSSAPI for other operating systems (for example, Linux), but the Sun ONE Directory Server 5.2 software does not use them on platforms other than the Solaris OE.

Understanding GSSAPI

The Generic Security Services Application Program Interface (GSSAPI) is a standard interface, defined by RFC 2743, that provides a generic authentication and secure messaging interface into which these security mechanisms can be plugged. The most commonly referenced GSSAPI mechanism is the Kerberos mechanism, which is based on secret-key cryptography.

One of the main aspects of GSSAPI is that it allows developers to add secure authentication and privacy (encryption and/or integrity checking) protection to data being passed over the wire by writing to a single programming interface. This is shown in Figure 3-2.

Figure 3-2. GSSAPI Layers

The underlying security mechanisms are loaded at the time the programs are executed, as opposed to when they are compiled and built. In practice, the most commonly used GSSAPI mechanism is Kerberos v5. The Solaris OE provides a few flavors of Diffie-Hellman GSSAPI mechanisms, which are only useful to NIS+ applications.

What can be confusing is that developers might write applications that write directly to the Kerberos API, or they might write GSSAPI applications that request the Kerberos mechanism. There is a big difference, and applications that speak Kerberos directly cannot communicate with those that speak GSSAPI. The wire protocols are not compatible, even though the underlying Kerberos protocol is in use. An example is telnet with Kerberos: a secure telnet program that authenticates a telnet user and encrypts data, including passwords exchanged over the network during the telnet session. The authentication and message protection features are provided using Kerberos. The telnet program with Kerberos only uses Kerberos, which is based on secret-key technology. However, a telnet program written to the GSSAPI interface can use Kerberos as well as other security mechanisms supported by GSSAPI.

The Solaris OE does not deliver any libraries that provide support for third-party companies to program directly to the Kerberos API. The intent is to encourage developers to use the GSSAPI. Many open-source Kerberos implementations (MIT, Heimdal) allow users to write Kerberos applications directly.

On the wire, the GSSAPI is compatible with Microsoft's SSPI, and thus GSSAPI applications can communicate with Microsoft applications that use SSPI and Kerberos.

The GSSAPI is preferred because it is a standardized API, whereas Kerberos is not. This means that the MIT Kerberos development team might change the programming interface at any time, and any applications that exist today might not work in the future without some code modifications. Using GSSAPI avoids this problem.

Another benefit of GSSAPI is its pluggable feature, which is a big advantage, especially if a developer later decides that there is a better authentication method than Kerberos, because it can easily be plugged into the system and existing GSSAPI applications should be able to use it without being recompiled or patched in any way.

Understanding Kerberos v5

Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications through the use of secret-key cryptography. Originally developed at the Massachusetts Institute of Technology, it is included in the Solaris OE to provide strong authentication for Solaris OE network applications.

In addition to providing a secure authentication protocol, Kerberos also offers the ability to add privacy support (encrypted data streams) for remote applications such as telnet, ftp, rsh, rlogin, and other common UNIX network applications. In the Solaris OE, Kerberos can also be used to provide strong authentication and privacy support for Network File System (NFS), allowing secure and private file sharing across the network.

Because of its widespread acceptance and implementation in other operating systems, including Windows 2000, HP-UX, and Linux, the Kerberos authentication protocol can interoperate in a heterogeneous environment, allowing users on machines running one OS to securely authenticate themselves on hosts of a different OS.

The Kerberos software is available for Solaris OE versions 2.6, 7, 8, and 9 in a separate package called the Sun Enterprise Authentication Mechanism (SEAM) software. For Solaris 2.6 and Solaris 7 OE, Sun Enterprise Authentication Mechanism software is included as part of the Solaris Easy Access Server 3.0 (Solaris SEAS) package. For Solaris 8 OE, the Sun Enterprise Authentication Mechanism software package is available with the Solaris 8 OE Admin Pack.

For Solaris 2.6 and Solaris 7 OE, the Sun Enterprise Authentication Mechanism software is freely available as part of the Solaris Easy Access Server 3.0 package, available for download from:

For Solaris 8 OE systems, Sun Enterprise Authentication Mechanism software is available in the Solaris 8 OE Admin Pack, available for download from: material/adminPack/index.html.

For Solaris 9 OE systems, Sun Enterprise Authentication Mechanism software is already installed by default and consists of the packages listed in Table 3-1.

Table 3-1. Solaris 9 OE Kerberos v5 Packages

Package Name

- Kerberos v5 KDC (root)
- Kerberos v5 master KDC (user)
- Kerberos version 5 Support (Root)
- Kerberos version 5 Support (Usr)
- Kerberos version 5 Support (Usr) (64-bit)

All of these Sun Enterprise Authentication Mechanism software distributions are based on the MIT KRB5 release version 1.0. The client programs in these distributions are compatible with later MIT releases (1.1, 1.2) and with other implementations that are compliant with the standard.

How Kerberos Works

The following is an overview of the Kerberos v5 authentication system. From the user's standpoint, Kerberos v5 is mostly invisible after the Kerberos session has been started. Initializing a Kerberos session often involves no more than logging in and providing a Kerberos password.

The Kerberos system revolves around the concept of a ticket. A ticket is a set of electronic information that serves as identification for a user or a service, such as the NFS service. Just as your driver's license identifies you and shows what driving permissions you have, so a ticket identifies you and your network access privileges. When you perform a Kerberos-based transaction (for example, if you use rlogin to log in to another machine), your system transparently sends a request for a ticket to a Key Distribution Center, or KDC. The KDC accesses a database to authenticate your identity and returns a ticket that grants you permission to access the other machine. Transparently means that you do not need to explicitly request a ticket.

Tickets have certain attributes associated with them. For example, a ticket can be forwardable (which means that it can be used on another machine without a new authentication process), or postdated (not valid until a specified time). How tickets are used (for example, which users are allowed to obtain which types of tickets) is determined by policies that are set when Kerberos is installed or administered.

You will frequently see the terms credential and ticket. In the Kerberos world, they are often used interchangeably. Technically, however, a credential is a ticket plus the session key for that session.
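The ticket attributes and the ticket/credential distinction described above can be sketched in a few lines of Python. This is a conceptual model only; the field names are illustrative and do not reflect the actual Kerberos wire format:

```python
from dataclasses import dataclass
import time

@dataclass
class Ticket:
    client: str          # e.g. "lucy@EXAMPLE.COM"
    service: str         # e.g. "nfs/host@EXAMPLE.COM"
    forwardable: bool    # usable from another machine without re-authentication
    start_time: float    # a postdated ticket has a start_time in the future
    end_time: float

    def is_valid(self, now=None):
        now = time.time() if now is None else now
        return self.start_time <= now < self.end_time

@dataclass
class Credential:
    # Technically, a credential is a ticket plus the session key.
    ticket: Ticket
    session_key: bytes

tgt = Ticket("lucy@EXAMPLE.COM", "krbtgt/EXAMPLE.COM@EXAMPLE.COM",
             forwardable=True, start_time=1000.0, end_time=1000.0 + 8 * 3600)
print(tgt.is_valid(now=2000.0))   # True  - within the ticket lifetime
print(tgt.is_valid(now=500.0))    # False - postdated relative to this clock
```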

Initial Authentication

Kerberos authentication has two phases: an initial authentication that allows for all subsequent authentications, and the subsequent authentications themselves.

A client (a user, or a service such as NFS) begins a Kerberos session by requesting a ticket-granting ticket (TGT) from the Key Distribution Center (KDC). This request is often done automatically at login.

A ticket-granting ticket is needed to obtain other tickets for specific services. Think of the ticket-granting ticket as something similar to a passport. Like a passport, the ticket-granting ticket identifies you and allows you to obtain numerous "visas," where the "visas" (tickets) are not for foreign countries but for remote machines or network services. Like passports and visas, the ticket-granting ticket and the various other tickets have limited lifetimes. The difference is that Kerberized commands notice that you have a passport and obtain the visas for you. You don't need to perform the transactions yourself.

The KDC creates a ticket-granting ticket and sends it back, in encrypted form, to the client. The client decrypts the ticket-granting ticket using the client's password.

Now in possession of a valid ticket-granting ticket, the client can request tickets for all sorts of network operations for as long as the ticket-granting ticket lasts. This ticket usually lasts for a few hours. Each time the client performs a unique network operation, it requests a ticket for that operation from the KDC.
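The initial exchange can be modeled in miniature: the KDC encrypts the TGT reply under a key derived from the user's password, so only someone who knows the password can decrypt it. This is a toy sketch; the key derivation and XOR cipher below are stand-ins for Kerberos string-to-key and DES/AES, not real cryptography:

```python
import hashlib, json, os

def derive_key(password: str) -> bytes:
    # Stand-in for the Kerberos string-to-key function.
    return hashlib.sha256(password.encode()).digest()

def encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream cipher - illustration only, NOT secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

decrypt = encrypt  # XOR is its own inverse

# The KDC's database maps each principal to its long-term key.
kdc_db = {"lucy@EXAMPLE.COM": derive_key("lucy-password")}

def kdc_issue_tgt(principal: str) -> bytes:
    """KDC side: build a fresh session key and return the TGT reply
    encrypted under the principal's long-term key."""
    session_key = os.urandom(32)
    reply = json.dumps({"session_key": session_key.hex(),
                        "service": "krbtgt/EXAMPLE.COM"}).encode()
    return encrypt(kdc_db[principal], reply)

# Client side: only the correct password recovers the reply.
blob = kdc_issue_tgt("lucy@EXAMPLE.COM")
reply = json.loads(decrypt(derive_key("lucy-password"), blob))
print(reply["service"])   # krbtgt/EXAMPLE.COM
```

A client that derives its key from the wrong password gets only unintelligible bytes back, which is exactly why no password ever needs to cross the wire.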

Subsequent Authentications

The client requests a ticket for a particular service from the KDC by sending the KDC its ticket-granting ticket as proof of identity.

  • The KDC sends the ticket for the specific service to the client.

    For example, suppose user lucy wants to access an NFS file system that has been shared with krb5 authentication required. Since she is already authenticated (that is, she already has a ticket-granting ticket), as she attempts to access the files, the NFS client system automatically and transparently obtains a ticket from the KDC for the NFS service.

  • The client sends the ticket to the server.

    When using the NFS service, the NFS client automatically and transparently sends the ticket for the NFS service to the NFS server.

  • The server allows the client access.

    These steps make it appear that the server never communicates with the KDC. The server does, though, as it registers itself with the KDC, just as the first client does.

Principals

    A client is identified by its principal. A principal is a unique identity to which the KDC can assign tickets. A principal can be a user, such as joe, or a service, such as NFS.

    By convention, a principal name is divided into three parts: the primary, the instance, and the realm. A typical principal would be, for example, lucy/admin@EXAMPLE.COM, where:

    lucy is the primary. The primary can be a user name, as shown here, or a service, such as NFS. The primary can also be the word host, which signifies that this principal is a service principal set up to provide various network services.

    admin is the instance. An instance is optional in the case of user principals, but it is required for service principals. For example, if the user lucy sometimes acts as a system administrator, she can use lucy/admin to distinguish herself from her usual user identity. Likewise, if lucy has accounts on two different hosts, she can use two principal names with different instances (for example, lucy/ and lucy/

    A realm is a logical network, similar to a domain, that defines a group of systems under the same master KDC. Some realms are hierarchical (one realm being a superset of the other). Otherwise, the realms are non-hierarchical (or direct) and the mapping between the two realms must be defined.
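The primary/instance/realm convention can be captured in a small parser. This is an illustrative helper, not part of any Kerberos library:

```python
def parse_principal(name: str) -> dict:
    """Split a Kerberos v5 principal of the conventional form
    primary[/instance]@REALM into its three components."""
    if "@" not in name:
        raise ValueError("principal must contain a realm: " + name)
    rest, realm = name.rsplit("@", 1)       # realm follows the last '@'
    primary, _, instance = rest.partition("/")
    return {"primary": primary,
            "instance": instance or None,   # instance is optional for users
            "realm": realm}

print(parse_principal("lucy/admin@EXAMPLE.COM"))
# {'primary': 'lucy', 'instance': 'admin', 'realm': 'EXAMPLE.COM'}
print(parse_principal("joe@EXAMPLE.COM"))
# {'primary': 'joe', 'instance': None, 'realm': 'EXAMPLE.COM'}
```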

Realms and KDC Servers

    Each realm must include a server that maintains the master copy of the principal database. This server is called the master KDC server. Additionally, each realm should contain at least one slave KDC server, which contains duplicate copies of the principal database. Both the master KDC server and the slave KDC server create tickets that are used to establish authentication.

Understanding the Kerberos KDC

    The Kerberos Key Distribution Center (KDC) is a trusted server that issues Kerberos tickets to clients and servers so they can communicate securely. A Kerberos ticket is a block of data that is presented as the user's credentials when attempting to access a Kerberized service. A ticket contains information about the user's identity and a temporary encryption key, all encrypted in the server's private key. In the Kerberos environment, any entity that is defined to have a Kerberos identity is referred to as a principal.

    A principal can be an entry for a particular user, host, or service (such as NFS or FTP) that is to interact with the KDC. Most commonly, the KDC server system also runs the Kerberos Administration Daemon, which handles administrative commands such as adding, deleting, and modifying principals in the Kerberos database. Typically, the KDC, the admin server, and the database are all on the same machine, but they can be separated if necessary. Some environments may require that multiple realms be configured, with master KDCs and slave KDCs for each realm. The principles applied for securing each realm and KDC should be applied to all realms and KDCs in the network to ensure that there isn't a single weak link in the chain.

    One of the first steps to take when initializing your Kerberos database is to create it using the kdb5_util command, which is found in /usr/sbin. When running this command, the user has the choice of whether or not to create a stash file. The stash file is a local copy of the master key that resides on the KDC's local disk. The master key contained in the stash file is generated from the master password that the user enters when first creating the KDC database. The stash file is used to authenticate the KDC to itself automatically before starting the kadmind and krb5kdc daemons (for example, as part of the machine's boot sequence).

    If a stash file is not used when the database is created, the administrator who starts up the krb5kdc process will have to manually enter the master key (password) every time they start the process. This may seem like a typical trade-off between convenience and security, but if the rest of the system is sufficiently hardened and protected, very little security is lost by having the master key stored in the protected stash file. It is recommended that at least one slave KDC server be installed for each realm to ensure that a backup is available in the event that the master server becomes unavailable, and that any slave KDC be configured with the same level of security as the master.

    Currently, the Sun Kerberos v5 Mechanism utility, kdb5_util, can create three types of keys: DES-CBC-CRC, DES-CBC-MD5, and DES-CBC-RAW. DES-CBC stands for DES encryption with Cipher Block Chaining, and the CRC, MD5, and RAW designators refer to the checksum algorithm that is used. By default, the key created will be DES-CBC-CRC, which is the default encryption type for the KDC. The type of key created is specified on the command line with the -k option (see the kdb5_util(1M) man page). Choose the password for your stash file very carefully, because this password can be used in the future to decrypt the master key and modify the database. The password may be up to 1024 characters long and can include any combination of letters, numbers, punctuation, and spaces.

    The following is an example of creating a stash file:

    kdc1 # /usr/sbin/kdb5_util create -r EXAMPLE.COM -s
    Initializing database '/var/krb5/principal' for realm 'EXAMPLE.COM'
    master key name 'K/M@EXAMPLE.COM'
    You will be prompted for the database Master Password.
    It is important that you NOT FORGET this password.
    Enter KDC database master key: master_key
    Re-enter KDC database master key to verify: master_key

    Note the use of the -s argument to create the stash file. The location of the stash file is in /var/krb5. The stash file appears with the following mode and ownership settings:

    kdc1 # cd /var/krb5
    kdc1 # ls -l
    -rw-------   1 root     other         14 Apr 10 14:28  .k5.EXAMPLE.COM

    The directory used to store the stash file and the database should not be shared or exported.

Secure Settings in the KDC Configuration File

    The KDC and Administration daemons both read configuration information from /etc/krb5/kdc.conf. This file contains KDC-specific parameters that govern overall behavior for the KDC and for specific realms. The parameters in the kdc.conf file are explained in detail in the kdc.conf(4) man page.

    The kdc.conf parameters describe locations of various files and ports to use for accessing the KDC and the administration daemon. These parameters generally do not need to be changed, and doing so does not result in any added security. However, there are some parameters that can be adjusted to enhance the overall security of the KDC. The following are some examples of adjustable parameters that enhance security.

  • kdc_ports – Defines the ports that the KDC will listen on to receive requests. The standard port for Kerberos v5 is 88. Port 750 is included and commonly used to support older clients that still use the default port designated for Kerberos v4. The Solaris OE still listens on port 750 for backwards compatibility. This is not considered a security risk.

  • max_life – Defines the maximum lifetime of a ticket, and defaults to eight hours. In environments where it is desirable to have users re-authenticate frequently and to reduce the chance of having a principal's credentials stolen, this value should be lowered. The recommended value is eight hours.

  • max_renewable_life – Defines the period of time from when a ticket is issued that it may be renewed (using kinit -R). The standard value here is 7 days. To disable renewable tickets, this value may be set to 0 days, 0 hrs, 0 min. The recommended value is 7d 0h 0m 0s.

  • default_principal_expiration – A Kerberos principal is any unique identity to which Kerberos can assign a ticket. In the case of users, it is the same as the UNIX system user name. The default lifetime of any principal in the realm may be defined in the kdc.conf file with this option. This should be used only if the realm will contain temporary principals, otherwise the administrator will have to constantly be renewing principals. Usually, this setting is left undefined and principals do not expire. This is not insecure as long as the administrator is vigilant about removing principals for users that no longer need access to the systems.

  • supported_enctypes – The encryption types supported by the KDC may be defined with this option. At this time, Sun Enterprise Authentication Mechanism software only supports the des-cbc-crc:normal encryption type, but in the future this may be used to ensure that only strong cryptographic ciphers are used.

  • dict_file – The location of a dictionary file containing strings that are not allowed as passwords. A principal with any password policy (see below) will not be able to use words found in this dictionary file. This is not defined by default. Using a dictionary file is a good way to prevent users from creating trivial passwords to protect their accounts, and thus helps avoid one of the most common weaknesses in a computer network: guessable passwords. The KDC will only check passwords against the dictionary for principals which have a password policy association, so it is good practice to have at least one simple policy associated with all principals in the realm.

  • The Solaris OE has a default system dictionary, used by the spell program, that may also be used by the KDC as a dictionary of common passwords. The location of this file is /usr/share/lib/dict/words. Other dictionaries may be substituted. The format is one word or phrase per line.
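The effect of dict_file can be approximated in a few lines of Python. This is a hypothetical helper for illustration; the KDC performs this check internally for principals that have a password policy:

```python
def load_dictionary(path: str) -> set:
    """Load a dict_file in the expected format: one word or phrase per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def password_allowed(password: str, dictionary: set) -> bool:
    # The KDC rejects a candidate password that appears in the dictionary.
    return password.lower() not in dictionary

# Stand-in for /usr/share/lib/dict/words:
words = {"secret", "password", "kerberos"}
print(password_allowed("password", words))   # False - found in dictionary
print(password_allowed("x7#pQ!9b", words))   # True  - not a dictionary word
```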

    The following is a Kerberos v5 /etc/krb5/kdc.conf example with suggested settings:

    # Copyright 1998-2002 Sun Microsystems, Inc. All rights reserved.
    # Use is subject to license terms.
    #
    #ident  "@(#)kdc.conf 1.2     02/02/14 SMI"

    [kdcdefaults]
        kdc_ports = 88,750

    [realms]
        ___default_realm___ = {
            profile = /etc/krb5/krb5.conf
            database_name = /var/krb5/principal
            admin_keytab = /etc/krb5/kadm5.keytab
            acl_file = /etc/krb5/kadm5.acl
            kadmind_port = 749
            max_life = 8h 0m 0s
            max_renewable_life = 7d 0h 0m 0s
            default_principal_flags = +preauth
            dict_file = /usr/share/lib/dict/words
        }

Access Control

    The Kerberos administration server allows for granular control of the administrative commands through the use of an access control list (ACL) file (/etc/krb5/kadm5.acl). The syntax for the ACL file allows wildcarding of principal names, so it is not necessary to list every single administrator in the ACL file. This feature should be used with great care. The ACLs used by Kerberos allow privileges to be broken down into very precise functions that each administrator can perform. If a particular administrator only needs read access to the database, then that person should not be granted full admin privileges. Below is a list of the privileges allowed:

  • a – Allows the addition of principals or policies in the database.

  • A – Prohibits the addition of principals or policies in the database.

  • d – Allows the deletion of principals or policies in the database.

  • D – Prohibits the deletion of principals or policies in the database.

  • m – Allows the modification of principals or policies in the database.

  • M – Prohibits the modification of principals or policies in the database.

  • c – Allows the changing of passwords for principals in the database.

  • C – Prohibits the changing of passwords for principals in the database.

  • i – Allows inquiries to the database.

  • I – Prohibits inquiries to the database.

  • l – Allows the listing of principals or policies in the database.

  • L – Prohibits the listing of principals or policies in the database.

  • * – Short for all privileges (admcil).

  • x – Short for all privileges (admcil). Identical to *.
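The wildcard matching and privilege letters above can be sketched as follows. This is a simplified model for illustration; the real kadmind ACL syntax also supports target-principal fields and other refinements:

```python
from fnmatch import fnmatch

# (principal pattern, privilege string) pairs, as in kadm5.acl:
ACL = [
    ("lucy/admin@EXAMPLE.COM", "*"),     # full privileges (admcil)
    ("tom/admin@EXAMPLE.COM",  "dlc"),   # delete, list, change passwords
    ("*/readonly@EXAMPLE.COM", "il"),    # wildcarded principal: inquire + list
]

def allowed(principal: str, privilege: str) -> bool:
    """True if the first matching ACL entry grants `privilege`
    (one of a, d, m, c, i, l) to `principal`. An upper-case letter
    in the entry explicitly prohibits that privilege."""
    for pattern, privs in ACL:
        if fnmatch(principal, pattern):
            if privilege.upper() in privs:        # explicit prohibition
                return False
            return privs in ("*", "x") or privilege in privs
    return False                                  # no entry: no privileges

print(allowed("lucy/admin@EXAMPLE.COM", "d"))    # True  - '*' grants all
print(allowed("tom/admin@EXAMPLE.COM", "a"))     # False - only d, l, c granted
print(allowed("ann/readonly@EXAMPLE.COM", "l"))  # True  - wildcard entry matches
```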

Adding Administrators

    After the ACLs are install, exact administrator principals may silent subsist brought to the device. it's strongly counseled that administrative users fill separate /admin principals to expend handiest when administering the gadget. for example, person Lucy would fill two principals in the database - lucy@REALM and lucy/admin@REALM. The /admin considerable would best subsist used when administering the equipment, now not for getting ticket-granting-tickets (TGTs) to entry far off functions. using the /admin predominant only for administrative applications minimizes the probability of someone running up to Joe’s unattended terminal and performing unauthorized administrative instructions on the KDC.

    Kerberos principals could subsist differentiated with the aid of the instance piece of their main identify. in the case of consumer principals, the most benchmark example identifier is /admin. it is universal rehearse in Kerberos to differentiate user principals with the aid of defining some to subsist /admin circumstances and others to fill no selected instance identifier (for example, lucy/admin@REALM versus lucy@REALM). Principals with the /admin illustration identifier are assumed to fill administrative privileges described in the ACL file and may handiest subsist used for administrative purposes. A considerable with an /admin identifier which does not healthy up with any entries within the ACL file are not granted any administrative privileges, it might subsist treated as a non-privileged user predominant. additionally, user principals with the /admin identifier are given separate passwords and separate permissions from the non-admin foremost for the very user.

    The following is a sample /etc/krb5/kadm5.acl file:

    # Copyright (c) 1998-2000 by Sun Microsystems, Inc.
    # All rights reserved.
    #
    #pragma ident "@(#)kadm5.acl 1.1 01/03/19 SMI"
    # lucy/admin is given full administrative privilege
    lucy/admin@EXAMPLE.COM *
    #
    # tom/admin user is allowed to query the database (d), listing principals
    # (l), and changing user passwords (c)
    #
    tom/admin@EXAMPLE.COM dlc

    It is highly recommended that the kadm5.acl file be tightly controlled and that users be granted only the privileges they need to perform their assigned tasks.

    Creating Host Keys

    Creating host keys for systems in the realm, such as slave KDCs, is performed the same way as creating user principals. However, the -randkey option should always be used, so no one ever knows the actual key for the hosts. Host principals are almost always stored in the keytab file, to be used by root-owned processes that wish to act as Kerberos services for the local host. It is rarely necessary for anyone to actually know the password for a host principal, because the key is stored safely in the keytab and is only accessible by root-owned processes, never by actual users.

    When creating keytab files, the keys should always be extracted from the KDC on the same machine where the keytab is to reside, using the ktadd command from a kadmin session. If this is not feasible, take great care in transferring the keytab file from one machine to the next. A malicious attacker who possesses the contents of the keytab file could use the keys from the file in order to gain access to another user's or service's credentials. Having the keys would then allow the attacker to impersonate whatever principal the key represented and further compromise the security of that Kerberos realm. Some suggestions for transferring the keytab are to use Kerberized, encrypted ftp transfers, or to use the secure file transfer programs scp or sftp provided with the SSH package. Another safe method is to place the keytab on a removable disk and hand-deliver it to the destination.
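    For example, creating a host principal with a random key and extracting it into the local keytab might look like the following sketch. The host name kdc1.example.com is a placeholder for your FQDN, and the kvno and encryption type shown are illustrative:

```
kadmin: addprinc -randkey host/kdc1.example.com
Principal "host/kdc1.example.com@EXAMPLE.COM" created.
kadmin: ktadd host/kdc1.example.com
Entry for principal host/kdc1.example.com with kvno 3,
encryption type DES-CBC-CRC added to keytab
WRFILE:/etc/krb5/krb5.keytab.
```

    Because ktadd is run on the machine where the keytab resides, the key never crosses the network in a transferable file.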

    Hand delivery does not scale well for large installations, so using the Kerberized ftp daemon is probably the easiest and most secure method available.

    Using NTP to Synchronize Clocks

    All servers participating in the Kerberos realm need to have their system clocks synchronized to within a configurable time limit (the default is 300 seconds). The safest, most secure way to systematically synchronize the clocks on a network of Kerberos servers is by using the Network Time Protocol (NTP) service. The Solaris OE comes with an NTP client and NTP server software (SUNWntpu package). See the ntpdate(1M) and xntpd(1M) man pages for more information on the individual commands. For more information on configuring NTP, refer to the following Sun BluePrints OnLine NTP articles:

    It is critical that the time be synchronized in a secure manner. A simple denial of service attack on either a client or a server would involve just skewing the time on that system to be outside of the configured clock skew value, which would then prevent anyone from obtaining TGTs from that system or accessing Kerberized services on that system. The default clock-skew value of 5 minutes is the maximum recommended value.
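    A minimal NTP client setup on a KDC could look like the following sketch. The time server name ntp1.example.com is a placeholder, not part of this guide's configuration:

```
# /etc/inet/ntp.conf -- point the KDC at a trusted time source
server ntp1.example.com

# step the clock once, then start the NTP daemon
kdc1 # /usr/sbin/ntpdate ntp1.example.com
kdc1 # /etc/init.d/xntpd start
```

    Once the daemon is running, the clock is disciplined continuously, keeping the KDC well inside the 300-second skew limit.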

    The NTP infrastructure must also be secured, including the use of server hardening for the NTP server and application of NTP security features. Using the Solaris Security Toolkit software (formerly known as JASS) with the secure.driver script to create a minimal system and then installing just the necessary NTP software is one such method. The Solaris Security Toolkit software is available at:

    Documentation on the Solaris Security Toolkit software is available at:

    Establishing Password Policies

    Kerberos allows the administrator to define password policies that can be applied to some or all of the user principals in the realm. A password policy contains definitions for the following parameters:

  • Minimum Password Length – The number of characters in the password, for which the recommended value is 8.

  • Maximum Password Classes – The number of different character classes that must be used to make up the password. Letters, numbers, and punctuation are the three classes, and valid values are 1, 2, and 3. The recommended value is 2.

  • Saved Password History – The number of previous passwords that have been used by the principal and cannot be reused. The recommended value is 3.

  • Minimum Password Lifetime (seconds) – The minimum time that the password must be used before it can be changed. The recommended value is 3600 (1 hour).

  • Maximum Password Lifetime (seconds) – The maximum time that the password can be used before it must be changed. The recommended value is 7776000 (90 days).

    These values can be set as a group and stored as a single policy. Several policies can be defined for various principals. It is recommended that the minimum password length be set to at least 8 and that at least 2 classes be required. Most people tend to choose easy-to-remember and easy-to-type passwords, so it is a good idea to at least set up policies to encourage slightly more difficult-to-guess passwords through the use of these parameters. Setting the Maximum Password Lifetime value may be helpful in some environments, to force people to change their passwords periodically. The period is up to the local administrator, according to the overriding corporate security policy used at that particular site. Setting the Saved Password History value combined with the Minimum Password Lifetime value prevents people from simply switching their password several times until they get back to their original or favorite password.

    The maximum password length supported is 255 characters, unlike the UNIX password database, which only supports up to 8 characters. Passwords are stored in the KDC encrypted database using the KDC default encryption method, DES-CBC-CRC. In order to prevent password guessing attacks, it is recommended that users choose long passwords or pass phrases. The 255-character limit allows one to choose a small sentence or easy-to-remember phrase instead of a simple one-word password.

    It is possible to use a dictionary file to prevent users from choosing common, easy-to-guess words (see “Secure Settings in the KDC Configuration File” on page 70). The dictionary file is only used when a principal has a policy association, so it is highly recommended that at least one policy be in effect for all principals in the realm.

    The following is an example password policy creation:

    If you specify a kadmin command without specifying any options, kadmin displays the syntax (usage information) for that command. The following code box shows this, followed by an actual add_policy command with options.

    kadmin: add_policy
    usage: add_policy [options] policy
    options are:
        [-maxlife time] [-minlife time] [-minlength length]
        [-minclasses number] [-history number]
    kadmin: add_policy -minlife "1 hour" -maxlife "90 days" -minlength 8 -minclasses 2 -history 3 passpolicy
    kadmin: get_policy passpolicy
    Policy: passpolicy
    Maximum password life: 7776000
    Minimum password life: 3600
    Minimum password length: 8
    Minimum number of password character classes: 2
    Number of old keys kept: 3
    Reference count: 0

    This example creates a password policy called passpolicy, which enforces a maximum password lifetime of 90 days, a minimum length of 8 characters, a minimum of 2 different character classes (letters, numbers, punctuation), and a password history of 3.

    To apply this policy to an existing user, modify the following:

    kadmin: modprinc -policy passpolicy lucy
    Principal "lucy@EXAMPLE.COM" modified.

    To modify the default policy that is applied to all user principals in a realm, change the following:

    kadmin: modify_policy -maxlife "90 days" -minlife "1 hour" -minlength 8 -minclasses 2 -history 3 default
    kadmin: get_policy default
    Policy: default
    Maximum password life: 7776000
    Minimum password life: 3600
    Minimum password length: 8
    Minimum number of password character classes: 2
    Number of old keys kept: 3
    Reference count: 1

    The Reference count value shows how many principals are configured to use the policy.

    The default policy is automatically applied to all new principals that are not given the same password as the principal name when they are created. Any account with a policy assigned to it uses the dictionary (defined in the dict_file parameter in /etc/krb5/kdc.conf) to check for common passwords.
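    Conceptually, the dictionary check is a simple membership test against the word list named by dict_file. The sketch below illustrates the idea in shell; the word list and candidate password are hypothetical, and this is not SEAM's actual implementation:

```shell
# Build a tiny stand-in dictionary, then reject any candidate
# password that appears in it verbatim.
dict=$(mktemp)
printf '%s\n' password welcome letmein > "$dict"

candidate="welcome"
# grep -x matches whole lines only, so "welcomes" would not match
if grep -qx "$candidate" "$dict"; then
    echo "rejected: dictionary word"
else
    echo "accepted"
fi
rm -f "$dict"
```

    A real deployment points dict_file at a large system word list, so exact matches of common words are refused at password-change time.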

    Backing Up a KDC

    Backups of a KDC system should be made regularly or according to local policy. However, backups should exclude the /etc/krb5/krb5.keytab file. If the local policy requires that backups be done over a network, then these backups should be secured, either through the use of encryption or possibly by using a separate network interface that is only used for backup purposes and is not exposed to the same traffic as the non-backup network traffic. Backup storage media should always be kept in a secure, fireproof location.
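    One common approach (a sketch; the dump file path is arbitrary) is an ASCII dump of the principal database with kdb5_util, which captures the KDC database itself without including /etc/krb5/krb5.keytab:

```
kdc1 # /usr/sbin/kdb5_util dump /var/krb5/kdc_db_backup
```

    The resulting flat file can then be backed up under the same encryption and media-handling rules described above.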

    Monitoring the KDC

    Once the KDC is configured and running, it should be continually and vigilantly monitored. The Sun Kerberos v5 software KDC logs information into the /var/krb5/kdc.log file, but this location can be modified in the /etc/krb5/krb5.conf file, in the logging section.

    [logging]
    default = FILE:/var/krb5/kdc.log
    kdc = FILE:/var/krb5/kdc.log

    The KDC log file should have read and write permissions for the root user only, as follows:

    -rw-------   1 root     other      750 May 25 17:55 /var/krb5/kdc.log

    Kerberos Options

    The /etc/krb5/krb5.conf file contains information that all Kerberos applications use to determine what server to talk to and what realm they are participating in. Configuring the krb5.conf file is covered in the Sun Enterprise Authentication Mechanism Software Installation Guide. Also refer to the krb5.conf(4) man page for a full description of this file.

    The appdefaults section in the krb5.conf file contains parameters that control the behavior of many Kerberos client tools. Each tool may have its own subsection within the appdefaults section of the krb5.conf file.

    Many of the applications that use the appdefaults section use the same options; however, they might be set in different ways for each client application.
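    As an illustrative sketch (the option values shown are placeholders, not recommendations made in this guide), an [appdefaults] section with per-tool subsections might look like:

```
[appdefaults]
        kinit = {
                renewable = true
                forwardable = true
        }
        telnet = {
                autologin = true
                encrypt = true
        }
```

    Each client reads only its own subsection, so the same option name can carry different values for different tools.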

    Kerberos Client Applications

    The following Kerberos applications can have their behavior modified through the use of options set in the appdefaults section of the /etc/krb5/krb5.conf file or by using various command-line arguments. These clients and their configuration settings are described below.


    kinit

    The kinit client is used by people who want to obtain a TGT from the KDC. The /etc/krb5/krb5.conf file supports the following kinit options: renewable, forwardable, no_addresses, max_life, max_renewable_life, and proxiable.


    telnet

    The Kerberos telnet client has many command-line arguments that control its behavior. Refer to the man page for complete information. However, there are several notable security issues involving the Kerberized telnet client.

    The telnet client uses a session key even after the service ticket that it was derived from has expired. This means that the telnet session remains active even after the ticket originally used to gain access is no longer valid. This is insecure in a strict environment; however, the trade-off between ease of use and strict security tends to lean in favor of ease of use in this situation. It is recommended that the telnet connection be re-initialized periodically by disconnecting and reconnecting with a new ticket. The overall lifetime of a ticket is defined by the KDC (/etc/krb5/kdc.conf), normally defined as eight hours.

    The telnet client allows the user to forward a copy of the credentials (TGT) used to authenticate to the remote system, using the -f and -F command-line options. The -f option sends a non-forwardable copy of the local TGT to the remote system, so that the user can access Kerberized NFS mounts or other local Kerberized services on that system only. The -F option sends a forwardable TGT to the remote system, so that the TGT can be used from the remote system to gain further access to other remote Kerberos services beyond that point. The -F option is a superset of -f. If the forwardable and/or forward options are set to false in the krb5.conf file, these command-line arguments can be used to override those settings, thus giving individuals control over whether and how their credentials are forwarded.

    The -x option should be used to turn on encryption for the data stream. This further protects the session from eavesdroppers. If the telnet server does not support encryption, the session is closed. The /etc/krb5/krb5.conf file supports the following telnet options: forward, forwardable, encrypt, and autologin. The autologin [true/false] parameter tells the client to try to attempt to log in without prompting the user for a user name. The local user name is passed on to the remote system in the telnet negotiations.

    rlogin and rsh

    The Kerberos rlogin and rsh clients behave much the same as their non-Kerberized equivalents. Because of this, it is recommended that if these services are required, any entries in network files such as /etc/hosts.equiv and the root user's .rhosts file be removed. The Kerberized versions have the added benefit of using the Kerberos protocol for authentication and can also use Kerberos to protect the privacy of the session with encryption.

    Similar to telnet described previously, the rlogin and rsh clients use a session key after the service ticket that it was derived from has expired. Thus, for maximum security, rlogin and rsh sessions should be re-initialized periodically. rlogin uses the -f, -F, and -x options in the same fashion as the telnet client. The /etc/krb5/krb5.conf file supports the following rlogin options: forward, forwardable, and encrypt.

    Command-line options override configuration file settings. For example, if the rsh section in the krb5.conf file shows encrypt false, but the -x option is used on the command line, an encrypted session is used.
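    A sketch of that override (somehost is a placeholder host name): the file setting disables encryption, but the command-line flag wins:

```
[appdefaults]
        rsh = {
                encrypt = false
        }

# the -x flag overrides the file setting, so this session is encrypted:
# rsh -x somehost ls
```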


    rcp

    Kerberized rcp can be used to transfer files securely between systems using Kerberos authentication and encryption (with the -x command-line option). It does not prompt for passwords; the user must already have a valid TGT before using rcp if they wish to use the encryption feature. However, beware: if the -x option is not used and no local credentials are available, the rcp session reverts to the standard, non-Kerberized (and insecure) rcp behavior. It is highly recommended that users always use the -x option when using the Kerberized rcp client. The /etc/krb5/krb5.conf file supports the encrypt [true/false] option.


    login

    The Kerberos login program (login.krb5) is forked from a successful authentication by the Kerberized telnet daemon or the Kerberized rlogin daemon. This Kerberos login daemon is separate from the standard Solaris OE login daemon, and thus the standard Solaris OE features such as BSM auditing are not yet supported when using this daemon. The /etc/krb5/krb5.conf file supports the krb5_get_tickets [true/false] option. If this option is set to true, then the login program will generate a new Kerberos ticket (TGT) for the user upon proper authentication.


    ftp

    The Sun Enterprise Authentication Mechanism (SEAM) version of the ftp client uses the GSSAPI (RFC 2743) with Kerberos v5 as the default mechanism. This means that it uses Kerberos authentication and (optionally) encryption through the Kerberos v5 GSS mechanism. The only Kerberos-related command-line options are -f and -m. The -f option is the same as described above for telnet (there is no need for a -F option). -m allows the user to specify an alternative GSS mechanism if so desired; the default is to use the kerberos_v5 mechanism.

    The protection level used for the data transfer can be set using the protect command at the ftp prompt. Sun Enterprise Authentication Mechanism software ftp supports the following protection levels:

  • Clear – unprotected, unencrypted transmission

  • Safe – data is integrity-protected using cryptographic checksums

  • Private – data is transmitted with confidentiality and integrity using encryption

    It is recommended that users set the protection level to private for all data transfers. The ftp client program does not support or reference the krb5.conf file to find any optional parameters. All ftp client options are passed on the command line. See the man page for the Kerberized ftp client, ftp(1).
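    A session using the protect command might look like the following sketch (the host and file names are placeholders):

```
kdclient # ftp kdc1.example.com
ftp> protect private
ftp> put datafile
```

    Once the protection level is set to private, subsequent transfers in the session are encrypted.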

    In summary, adding Kerberos to a network can increase the overall security available to the users and administrators of that network. Remote sessions can be securely authenticated and encrypted, and shared disks can be secured and encrypted across the network. In addition, Kerberos allows the database of user and service principals to be managed securely from any machine which supports the SEAM software Kerberos protocol. SEAM is interoperable with other RFC 1510-compliant Kerberos implementations, such as MIT Krb5 and some MS Windows 2000 Active Directory services. Adopting the practices recommended in this section further secures the SEAM software infrastructure to help ensure a safer network environment.

    Implementing the Sun ONE Directory Server 5.2 Software and the GSSAPI Mechanism

    This section provides a high-level overview, followed by the in-depth procedures that describe the setup necessary to deploy the GSSAPI mechanism and the Sun ONE Directory Server 5.2 software. This implementation assumes a realm of EXAMPLE.COM for this purpose. The following list gives an initial high-level overview of the steps required, with the next section providing the detailed information.

  • Set up DNS on the client machine. This is an important step because Kerberos requires DNS.

  • Install and configure the Sun ONE Directory Server version 5.2 software.

  • Check that the directory server and client both have the SASL plug-ins installed.

  • Install and configure Kerberos v5.

  • Edit the /etc/krb5/krb5.conf file.

  • Edit the /etc/krb5/kdc.conf file.

  • Edit the /etc/krb5/kadm5.acl file.

  • Move the kerberos_v5 line so it is the first line in the /etc/gss/mech file.

  • Create new principals using kadmin.local, which is an interactive command-line interface to the Kerberos v5 administration system.

  • Modify the rights for /etc/krb5/krb5.keytab. This access is necessary for the Sun ONE Directory Server 5.2 software.

  • Run /usr/sbin/kinit.

  • Verify that you have a ticket with /usr/bin/klist.

  • Perform an ldapsearch, using the ldapsearch command-line tool from the Sun ONE Directory Server 5.2 software, to test and verify.

    The sections that follow fill in the details.

    Configuring a DNS Client

    To be a DNS client, a machine must run the resolver. The resolver is neither a daemon nor a single program. It is a set of dynamic library routines used by applications that need to know machine names. The resolver's function is to resolve users' queries. To do that, it queries a name server, which then returns either the requested information or a referral to another server. Once the resolver is configured, a machine can request DNS service from a name server.

    The following example shows you how to configure the resolv.conf(4) file in the server kdc1 in the domain.

    ;
    ; /etc/resolv.conf file for dnsmaster
    ;
    domain
    nameserver
    nameserver

    The first line of the /etc/resolv.conf file lists the domain name in the form:

    domain domainname

    No spaces or tabs are permitted at the end of the domain name. Make sure that you press Return immediately after the last character of the domain name.

    The second line identifies the server itself in the form:

    nameserver IP_address

    Succeeding lines list the IP addresses of one or two slave or cache-only name servers that the resolver should consult to resolve queries. Name server entries have the form:

    nameserver IP_address

    IP_address is the IP address of a slave or cache-only DNS name server. The resolver queries these name servers in the order they are listed until it obtains the information it needs.

    For more detailed information on what the resolv.conf file does, refer to the resolv.conf(4) man page.

    To Configure Kerberos v5 (Master KDC)

    In this procedure, the following configuration parameters are used:

  • Realm name = EXAMPLE.COM

  • DNS domain name =

  • Master KDC =

  • admin principal = lucy/admin

  • Online help URL = http://example:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956

    This procedure requires that DNS be running.

    Before you begin this configuration process, make a backup of the /etc/krb5 files.

  • Become superuser on the master KDC (kdc1, in this example).

  • Edit the Kerberos configuration file (krb5.conf).

    You need to change the realm names and the names of the servers. See the krb5.conf(4) man page for a full description of this file.

    kdc1 # more /etc/krb5/krb5.conf
    [libdefaults]
            default_realm = EXAMPLE.COM
    [realms]
            EXAMPLE.COM = {
                    kdc =
                    admin_server =
            }
    [domain_realm]
            = EXAMPLE.COM
    [logging]
            default = FILE:/var/krb5/kdc.log
            kdc = FILE:/var/krb5/kdc.log
    [appdefaults]
            gkadmin = {
                    help_url = http://example:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956
            }

    In this example, the lines for domain_realm, kdc, admin_server, and all domain_realm entries were changed. In addition, the line with ___slave_kdcs___ in the [realms] section was deleted, and the line that defines the help_url was edited.

  • Edit the KDC configuration file (kdc.conf).

    You need to change the realm name. See the kdc.conf(4) man page for a full description of this file.

    kdc1 # more /etc/krb5/kdc.conf
    [kdcdefaults]
            kdc_ports = 88,750
    [realms]
            EXAMPLE.COM = {
                    profile = /etc/krb5/krb5.conf
                    database_name = /var/krb5/principal
                    admin_keytab = /etc/krb5/kadm5.keytab
                    acl_file = /etc/krb5/kadm5.acl
                    kadmind_port = 749
                    max_life = 8h 0m 0s
                    max_renewable_life = 7d 0h 0m 0s
    need changing ---------> default_principal_flags = +preauth
            }

    In this example, only the realm name definition in the [realms] section is changed.

  • Create the KDC database by using the kdb5_util command.

    The kdb5_util command, which is located in /usr/sbin, creates the KDC database. When used with the -s option, this command also creates a stash file that is used to authenticate the KDC to itself before the kadmind and krb5kdc daemons are started.

    kdc1 # /usr/sbin/kdb5_util create -r EXAMPLE.COM -s
    Initializing database '/var/krb5/principal' for realm 'EXAMPLE.COM'
    master key name 'K/M@EXAMPLE.COM'
    You will be prompted for the database Master Password.
    It is important that you NOT FORGET this password.
    Enter KDC database master key: key
    Re-enter KDC database master key to verify: key

    The -r option followed by the realm name is not required if the realm name is equivalent to the domain name in the server's name space.

  • Edit the Kerberos access control list file (kadm5.acl).

    Once populated, the /etc/krb5/kadm5.acl file contains all principal names that are allowed to administer the KDC. The first entry that is added might look similar to the following:

    lucy/admin@EXAMPLE.COM *

    This entry gives the lucy/admin principal in the EXAMPLE.COM realm the ability to modify principals or policies in the KDC. The default installation includes an asterisk (*) to match all admin principals. This default could be a security risk, so it is more secure to include a list of all of the admin principals. See the kadm5.acl(4) man page for more information.

  • Edit the /etc/gss/mech file.

    The /etc/gss/mech file contains the GSSAPI-based security mechanism names, their object identifiers (OID), and a shared library that implements the services for each mechanism under the GSSAPI. Change the following from:

    # Mechanism Name         Object Identifier       Shared Library  Kernel Module
    #
    diffie_hellman_640_0     1.3.6.4.
    diffie_hellman_1024_0    1.3.
    kerberos_v5              1.2.840.113554.1.2.2    gl/             gl_kmech_krb5

    To the following:

    # Mechanism Name         Object Identifier       Shared Library  Kernel Module
    #
    kerberos_v5              1.2.840.113554.1.2.2    gl/             gl_kmech_krb5
    diffie_hellman_640_0     1.3.6.4.
    diffie_hellman_1024_0    1.3.6.4.1.42.
  • Run the kadmin.local command to create principals.

    You can add as many admin principals as you need, but you must add at least one admin principal to complete the KDC configuration process. In the following example, lucy/admin is added as the principal.

    kdc1 # /usr/sbin/kadmin.local
    kadmin.local: addprinc lucy/admin
    Enter password for principal "lucy/admin@EXAMPLE.COM":
    Re-enter password for principal "lucy/admin@EXAMPLE.COM":
    Principal "lucy/admin@EXAMPLE.COM" created.
    kadmin.local:
  • Create a keytab file for the kadmind service.

    The following command sequence creates a special keytab file with principal entries for the kadmin and changepw services. These principals are needed for the kadmind service. In addition, you can optionally add NFS service principals, host principals, LDAP principals, and so on.

    When the principal instance is a host name, the fully qualified domain name (FQDN) must be entered in lowercase letters, regardless of the case of the domain name in the /etc/resolv.conf file.

    kadmin.local: ktadd -k /etc/krb5/kadm5.keytab kadmin/
    Entry for principal kadmin/ with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/kadm5.keytab.
    kadmin.local: ktadd -k /etc/krb5/kadm5.keytab changepw/
    Entry for principal changepw/ with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/kadm5.keytab.
    kadmin.local:

    Once you have added all of the required principals, you can exit from kadmin.local as follows:

    kadmin.local: quit
  • Start the Kerberos daemons as shown:

    kdc1 # /etc/init.d/kdc start
    kdc1 # /etc/init.d/kdc.master start


    You stop the Kerberos daemons by running the following commands:

    kdc1 # /etc/init.d/kdc stop
    kdc1 # /etc/init.d/kdc.master stop
  • Add principals by using the SEAM Administration Tool.

    To do this, you must log on with one of the admin principal names that you created earlier in this procedure. However, the following command-line example is shown for simplicity.

    kdc1 # /usr/sbin/kadmin -p lucy/admin
    Enter password: kws_admin_password
    kadmin:
  • Create the master KDC host principal, which is used by Kerberized applications such as klist and kprop.

    kadmin: addprinc -randkey host/
    Principal "host/" created.
    kadmin:
  • (Optional) Create the master KDC root principal, which is used for authenticated NFS mounting.

    kadmin: addprinc root/
    Enter password for principal root/: password
    Re-enter password for principal root/: password
    Principal "root/" created.
    kadmin:
  • Add the master KDC's host principal to the master KDC's keytab file, which allows this principal to be used automatically.

    kadmin: ktadd host/
    Entry for principal host/ with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab
    kadmin:

    Once you have added all of the required principals, you can exit from kadmin as follows:

    kadmin: quit
  • Run the kinit command to obtain and cache an initial ticket-granting ticket (credential) for the principal.

    This ticket is used for authentication by the Kerberos v5 system. kinit only needs to be run by the client at this point. If the Sun ONE directory server were a Kerberos client also, this step would need to be done for the server. However, you may also want to use this step to verify that Kerberos is up and running.

    kdclient # /usr/bin/kinit root/
    Password for root/: passwd
  • Verify and test that you have a ticket with the klist command.

    The klist command reports if there is a keytab file and displays the principals. If the results show that there is no keytab file or that there is no NFS service principal, you need to verify the completion of all of the previous steps.

    # klist -k
    Keytab name: FILE:/etc/krb5/krb5.keytab
    KVNO Principal
    ---- ------------------------------------------------------------------
       3 nfs/

    The example given here assumes a single domain. The KDC can reside on the same machine as the Sun ONE directory server for testing purposes, but there are security considerations to take into account regarding where the KDCs reside.

    With regard to the configuration of Kerberos v5 in conjunction with the Sun ONE Directory Server 5.2 software, you are finished with the Kerberos v5 part. It is now time to look at what must be configured on the Sun ONE directory server side.

    Sun ONE Directory Server 5.2 GSSAPI Configuration

    As previously discussed, the Generic Security Services Application Program Interface (GSSAPI) is a standard interface that enables you to use a security mechanism such as Kerberos v5 to authenticate clients. The server uses the GSSAPI to actually validate the identity of a particular user. Once this user is validated, it is up to the SASL mechanism to apply the GSSAPI mapping rules to obtain a DN that is the bind DN for all operations during the connection.

    the primary merchandise discussed is the brand unusual identification mapping performance.

    The identity mapping carrier is required to map the credentials of a different protocol, such as SASL DIGEST-MD5 and GSSAPI to a DN in the listing server. As you are going to descry in right here instance, the id mapping feature makes expend of the entries in the cn=identification mapping, cn=config configuration branch, whereby each and every protocol is defined and whereby every protocol must function the id mapping. For extra suggestions on the identity mapping function, quest advice from the solar ONE directory Server 5.2 files.

    To Perform the GSSAPI Configuration for the Sun ONE Directory Server Software
  • Check and verify, by retrieving the rootDSE entry, that GSSAPI is listed as one of the supported SASL mechanisms.

    Example of using ldapsearch to retrieve the rootDSE and get the supported SASL mechanisms:

    $ ./ldapsearch -h directoryserver_hostname -p ldap_port -b "" -s base "(objectclass=*)" supportedSASLMechanisms
    supportedSASLMechanisms=EXTERNAL
    supportedSASLMechanisms=GSSAPI
    supportedSASLMechanisms=DIGEST-MD5
  • Verify that the GSSAPI mechanism is enabled.

    By default, the GSSAPI mechanism is enabled.

    Example of using ldapsearch to verify that the GSSAPI SASL mechanism is enabled:

    $ ./ldapsearch -h directoryserver_hostname -p ldap_port -D "cn=Directory Manager" -w password -b "cn=SASL,cn=security,cn=config" "(objectclass=*)"
    #
    # Should return
    #
    cn=SASL,cn=security,cn=config
    objectClass=top
    objectClass=nsContainer
    objectClass=dsSaslConfig
    cn=SASL
    dsSaslPluginsPath=/var/Sun/mps/lib/sasl
    dsSaslPluginsEnable=DIGEST-MD5
    dsSaslPluginsEnable=GSSAPI
  • Create and add the GSSAPI identity-mapping.ldif file.

    Add the LDIF shown below to the Sun ONE Directory Server so that it contains the correct suffix for your directory server.

    You need to do this because, by default, no GSSAPI mappings are defined in the Sun ONE Directory Server 5.2 software.

    Example of a GSSAPI identity mapping LDIF file:

    dn: cn=GSSAPI,cn=identity mapping,cn=config
    objectclass: nsContainer
    objectclass: top
    cn: GSSAPI

    dn: cn=default,cn=GSSAPI,cn=identity mapping,cn=config
    objectclass: dsIdentityMapping
    objectclass: nsContainer
    objectclass: top
    cn: default
    dsMappedDN: uid=${Principal},ou=people,dc=example,dc=com

    dn: cn=same_realm,cn=GSSAPI,cn=identity mapping,cn=config
    objectclass: dsIdentityMapping
    objectclass: dsPatternMatching
    objectclass: nsContainer
    objectclass: top
    cn: same_realm
    dsMatching-pattern: ${Principal}
    dsMatching-regexp: (.*)
    dsMappedDN: uid=$1,ou=people,dc=example,dc=com

    It is essential to make use of the ${Principal} variable, because it is the only input you have from SASL in the case of GSSAPI. Either you build a DN using the ${Principal} variable, or you perform pattern matching to see whether you can apply a specific mapping. A principal corresponds to the identity of a user in Kerberos.

    You can find an example GSSAPI LDIF mappings file in ServerRoot/slapdserver/ldif/identityMapping_Examples.ldif.
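    The pattern-matching mapping described above can be sketched in a few lines of Python. This is only an illustration of the rule mechanics, not Directory Server code: the realm EXAMPLE.COM in the regexp and the suffix in the DN template are illustrative assumptions.

```python
import re

# Hypothetical sketch: a dsMatching-regexp is applied to the Kerberos
# principal, and the captured group fills the dsMappedDN template.
MATCHING_REGEXP = r"(.*)@EXAMPLE\.COM"          # assumed realm, for illustration
MAPPED_DN_TEMPLATE = "uid={},ou=people,dc=example,dc=com"

def map_principal_to_dn(principal):
    """Return the bind DN for a principal, or None when no rule matches."""
    match = re.fullmatch(MATCHING_REGEXP, principal)
    if match is None:
        return None
    return MAPPED_DN_TEMPLATE.format(match.group(1))

print(map_principal_to_dn("alice@EXAMPLE.COM"))
# uid=alice,ou=people,dc=example,dc=com
```

    A principal from another realm would fall through this rule and return no DN, which is exactly why a catch-all default mapping is also defined.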

    Here is an example of using ldapmodify to do this:

    $ ./ldapmodify -a -c -h directoryserver_hostname -p ldap_port -D "cn=Directory Manager" -w password -f identity-mapping.ldif -e /var/tmp/ldif.rejects 2> /var/tmp/ldapmodify.log
  • Perform a test using ldapsearch.

    To perform this test, type the ldapsearch command shown below, and answer the prompt with the kinit password you previously defined.

    Example of using ldapsearch to test the GSSAPI mechanism:

    $ ./ldapsearch -h directoryserver_hostname -p ldap_port -o mech=GSSAPI -o authzid="root/hostname.domainname@EXAMPLE.COM" -b "" -s base "(objectclass=*)"

    The output that is returned should be the same as without the -o options.

    If you do not use the -h hostname option, the GSS code ends up looking for a localhost.domainname Kerberos ticket, and an error occurs.

  • HP reports 'highly critical' Tru64 flaws

    Edmund X. DeJesus, Contributor

    Hewlett-Packard Co. is warning Tru64 administrators of "highly critical" vulnerabilities that could lead to local or remote unauthorized system access or denial of service. HP has released patches for both flaws.

    HP has declined to specify the nature of the vulnerabilities, except to say that they are in HP's implementations of IPSec and SSH.

    The locations of the vulnerabilities are ironic, in that both IPSec and SSH are supposed to provide security features to operating systems. IPSec is used to create encrypted, secure VPN tunnels for passing information between IP-based systems. SSH (Secure Shell) offers secure versions of network commands including rsh, rlogin and rcp, and of services such as telnet and ftp. Users frequently use SSH to log in to and execute commands on remote computers securely, as well as to set up secure communications between two computers.

    Affected versions of HP Tru64 UNIX include V5.1B PK2 (BL22) and PK3 (BL24), and V5.1A running IPSec and SSH software kits earlier than IPSec 2.1.1 and SSH 3.2.2. The vulnerabilities are not present in IPSec version 2.1.1 and SSH version 3.2.2.

    HP Tru64 UNIX, which runs on the legacy AlphaServer line, is in the process of being replaced by HP-UX. Tru64 has exhibited vulnerability issues before, including privilege escalation, denial of service and specific issues with SSH in August 2003.

    FOR MORE INFORMATION:

    Download the IPSec patch

    Download the SSH patch

    Microsoft Teams with CyberSafe to Make W2K Kerberos Interoperable


  • By Scott Bekker
  • 01/17/2000
  • Microsoft Corp. and CyberSafe Corp. today announced they have collaborated to extend Windows 2000 Kerberos interoperability to enterprise customers running mixed-system environments.

    Kerberos v5 is an industry-standard network authentication protocol, designed at the Massachusetts Institute of Technology to deliver "proof of identity" on the network. Kerberos v5 is a native feature of Windows 2000 and will be shipped as part of the operating system to provide secure, interoperable network authentication services to IT professionals.

    According to Microsoft, interoperability between Windows 2000 and ActiveTRUST from CyberSafe provides enterprise customers with secured communications and data transfers, available only through Kerberos validation; seamless interoperability with CyberSafe-supported platforms, including Solaris, HP-UX, AIX, Tru64, OS/390, Windows 9x and Windows NT; and single sign-on access to all network resources.

    Keith White, director of Windows marketing at Microsoft, says this announcement is part of Microsoft's effort to interoperate with other software platforms and to support open standards.

    Microsoft and CyberSafe have compiled their test results in an in-depth Kerberos implementation paper aimed at heterogeneous environments. "Kerberos Interoperability: Microsoft Windows 2000 and CyberSafe ActiveTRUST" is available at RSA Conference 2000 in San Jose, Calif., and will soon be available on the CyberSafe web site. – Thomas Sullivan

    About the author

    Scott Bekker is editor in chief of Redmond Channel Partner magazine.


    Breaking the Limits of Relational Databases: An Analysis of Cloud-Native Database Middleware: Part 1

    The development and transformation of database technology are on the rise. NewSQL has emerged to combine various technologies, and the core functions implemented by the combination of these technologies have promoted the development of the cloud-native database.

    This article provides insight into cloud-native database technology among the three types of NewSQL. The new-architecture and Database-as-a-Service types involve many underlying implementations related to the database and thus will not be elaborated here. This article focuses on the core functions and implementation principles of transparent sharding middleware. The core functions of the other two NewSQL types are similar to those of sharding middleware but have different implementation principles.


    Regarding performance and availability, traditional solutions that store data on a single data node in a centralized manner can no longer adapt to the massive data scenarios created by the Internet. Most relational database products use B+ tree indexes. When the data volume exceeds the threshold, the increase in index depth leads to an increased disk I/O count, substantially degrading query performance. In addition, highly concurrent access requests turn the centralized database into the biggest bottleneck of the system.

    Since traditional relational databases cannot meet the requirements of the Internet, increasing numbers of attempts have been made to store data in NoSQL databases that natively support data distribution. However, NoSQL is not compatible with SQL, and its ecosystem has yet to mature. Therefore, NoSQL cannot replace relational databases, and the position of relational databases remains secure.

    Sharding refers to the distribution of the data stored in a single database to multiple databases or tables based on a certain dimension, in order to improve overall performance and availability. Effective sharding measures include database sharding and table sharding of relational databases. Both sharding methods can effectively prevent query bottlenecks caused by a data volume that exceeds the threshold.

    In addition, database sharding can effectively distribute the access requests of a single database, while table sharding can convert distributed transactions into local transactions whenever possible. The multi-master, multi-slave sharding method can effectively prevent data single points of failure and enhance the availability of the data architecture.

    Vertical Sharding

    Vertical sharding is also known as vertical partitioning. Its key idea is to use different databases for different purposes. Before sharding is performed, a database can consist of multiple data tables that correspond to different businesses. After sharding is performed, the tables are organized by business and distributed to different databases, balancing the workload among the databases, as shown below:

    Figure 1: Vertical sharding

    Horizontal Sharding

    Horizontal sharding is also known as horizontal partitioning. In contrast to vertical sharding, horizontal sharding does not organize data by business logic. Instead, it distributes data to multiple databases or tables according to a rule applied to a specific field, and each shard contains only part of the data.

    For example, if the last digit of an ID (ID mod 10) is 0, the record is stored in database (table) 0; if the last digit (ID mod 10) is 1, it is stored in database (table) 1, as shown below:

    Figure 2: Horizontal sharding
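    The mod-based routing rule above can be sketched in a few lines. This is a minimal illustration of the idea; the shard-naming scheme (db_0 through db_9) is an assumption made here for the example.

```python
# Route each record to one of 10 databases/tables by ID mod 10,
# mirroring the horizontal-sharding example above.
NUM_SHARDS = 10

def route(record_id: int) -> str:
    """Return the shard name for a record, based on the last digit of its ID."""
    return f"db_{record_id % NUM_SHARDS}"

print(route(170))  # last digit 0 -> db_0
print(route(291))  # last digit 1 -> db_1
```

    Real middleware applies the same kind of rule, but derives it from a configured sharding algorithm rather than a hard-coded modulus.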

    Sharding is an effective solution to the performance problem of relational databases caused by massive data.

    In this solution, data on a single node is split and stored in multiple databases or tables; that is, the data is sharded. Database sharding can effectively disperse the load on databases caused by highly concurrent access attempts. Although table sharding cannot mitigate the load on databases, you can still use database-native ACID transactions for updates across table shards. Once cross-database updates are involved, the problem of distributed transactions becomes extremely complicated.

    Database sharding and table sharding ensure that the data volume of each table always stays below the threshold. Vertical sharding usually requires adjustments to the architecture and design and, for this reason, fails to keep up with the rapidly changing business requirements of the Internet. Therefore, it cannot effectively remove the single-point bottleneck. Horizontal sharding theoretically removes the bottleneck in the data processing of a single host and supports elastic scaling, making it the standard sharding solution.

    Database sharding and read/write separation are the two common measures for heavy access traffic. Although table sharding can solve the performance problems caused by massive data, it cannot solve the problem of slow responses caused by excessive requests to the same database. For this reason, database sharding is often implemented as horizontal sharding to handle the huge data volume and heavy access traffic. Read/write separation is another way to distribute traffic. However, you must consider the latency between data writing and data reading when designing the architecture.

    Although database sharding can solve these problems, the distributed architecture introduces new ones. Because the data is widely dispersed after database sharding or table sharding, application development and O&M personnel face extremely heavy workloads when operating on the database. For example, they need to know the specific table shard and the database that holds each kind of data.

    NewSQL with a brand-new architecture resolves this problem in a way that is different from that of sharding middleware:

  • In NewSQL with the new architecture, the database storage engine is redesigned to store the data of the same table in a distributed file system.
  • In sharding middleware, the impacts of sharding are transparent to users, allowing them to use a horizontally sharded database as an ordinary database.
  • Cross-database transactions present a big challenge to distributed databases. With appropriate table sharding, you can reduce the amount of data stored in each table and use local transactions whenever possible. Proper use of different tables in the same database can effectively help avoid the problems caused by distributed transactions. However, in scenarios where cross-database transactions are inevitable, some businesses still require transactions to be consistent. On the other hand, Internet companies have turned their backs on XA-based distributed transactions because of their poor performance; instead, most of these companies use soft transactions that ensure eventual consistency.

    Read/Write Separation

    Database throughput faces a huge bottleneck as system access traffic increases. For applications with a large number of concurrent reads and few writes, you can split a single database into primary and secondary databases. The primary database handles the addition, deletion, and modification operations of transactions, while the secondary database handles queries. This effectively avoids the row-locking problems caused by data updates and dramatically improves the query performance of the entire system.

    If you configure one primary database and multiple secondary databases, query requests can be evenly distributed across multiple data copies, further enhancing the system's processing capability.

    If you configure multiple primary databases and multiple secondary databases, both the throughput and the availability of the system can be improved. In this configuration, the system can still run normally even when one of the databases is down or a disk is physically damaged.

    Read/write separation is essentially a type of sharding. In horizontal sharding, data is dispersed to different data nodes. In read/write separation, read and write requests are routed to the primary and secondary databases, respectively, based on the results of SQL syntax analysis. Notably, data on different data nodes is consistent in read/write separation but different in horizontal sharding. By using horizontal sharding in conjunction with read/write separation, you can further improve system performance, but system maintenance becomes more complicated.

    Although read/write separation can improve the throughput and availability of the system, it also introduces data inconsistency, both between multiple primary databases and between the primary and secondary databases. Moreover, similar to sharding, read/write separation increases database O&M complexity for application development and O&M personnel.

    The key benefit of read/write separation is that its impacts are transparent to users, allowing them to use the primary and secondary databases as an ordinary database.
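    The routing side of read/write separation can be sketched as follows. Real middleware routes based on full SQL syntax analysis, as described above; the first-keyword check and the replica names used here are simplifications for illustration only.

```python
import itertools

# Assumed replica names for this sketch.
SECONDARIES = ["secondary_0", "secondary_1"]
_round_robin = itertools.cycle(SECONDARIES)

def route_statement(sql: str) -> str:
    """Send reads to a secondary replica (round-robin), all writes to the primary."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word == "SELECT":
        return next(_round_robin)   # spread reads across replicas
    return "primary"                # INSERT/UPDATE/DELETE go to the primary

print(route_statement("SELECT * FROM t"))     # one of the secondaries
print(route_statement("UPDATE t SET c = 1"))  # primary
```

    Note how replication lag immediately matters: a SELECT routed to a secondary right after an UPDATE on the primary may not see the new value, which is the read-after-write latency issue mentioned earlier.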

    Key Processes

    Sharding consists of the following processes: statement parsing, statement routing, statement rewriting, statement execution, and result aggregation. Database protocol adaptation is essential to ensure low-cost access by existing applications.
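    The pipeline named above can be shown as a skeleton, with each stage as a stub. This is a sketch of the overall flow only; real sharding middleware implements each stage in depth, and the table name t_order and the broadcast routing used here are illustrative assumptions.

```python
def parse(sql):
    """Statement parsing: extract what routing needs (here, just the first keyword)."""
    return {"type": sql.lstrip().split(None, 1)[0].upper(), "text": sql}

def route(stmt, shards):
    """Statement routing: decide which shards must run it (broadcast, for simplicity)."""
    return list(shards)

def rewrite(stmt, shard):
    """Statement rewriting: turn the logical table name into the physical one."""
    return stmt["text"].replace("t_order", f"t_order_{shard}")

def execute(sql, shard):
    """Statement execution: stand-in for sending SQL to the shard and reading rows."""
    return [f"row from shard {shard}"]

def query(sql, shards=(0, 1)):
    stmt = parse(sql)
    results = []
    for shard in route(stmt, shards):
        results.extend(execute(rewrite(stmt, shard), shard))
    return results  # result aggregation: merge per-shard rows

print(query("SELECT * FROM t_order"))
# ['row from shard 0', 'row from shard 1']
```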

    Protocol Adaptation

    In addition to SQL, NewSQL is compatible with the protocols of traditional relational databases, reducing access costs for users. By implementing the protocols of open-source relational database products, NewSQL products can act as native relational databases.

    Due to the popularity of MySQL and PostgreSQL, many NewSQL databases implement the transport protocols of MySQL and PostgreSQL, allowing MySQL and PostgreSQL users to access NewSQL products without modifying their business code.

    MySQL Protocol

    Currently, MySQL is the most popular open-source database product. To learn about its protocol, you can start with MySQL's basic data types, protocol packet structure, connection phase, and command phase.

    Basic Data Types:

    A MySQL packet consists of the following basic data types defined by MySQL:

    Figure 3: Basic MySQL data types

    When binary data needs to be converted into data that MySQL can understand, the MySQL packet is read based on the number of bytes pre-defined by the data type and converted into the corresponding number or string. Conversely, MySQL writes each field into the packet according to the length specified by the protocol.
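    For the fixed-length integer types (int<1>, int<2>, int<3>, and so on), the reading described above amounts to consuming a little-endian byte string of the declared width. A minimal sketch:

```python
def read_int(data: bytes, offset: int, width: int):
    """Read a MySQL int<width> at `offset`; return (value, new_offset)."""
    value = int.from_bytes(data[offset:offset + width], "little")
    return value, offset + width

# An int<3> over the bytes 0x05 0x00 0x00 is the value 5.
value, nxt = read_int(b"\x05\x00\x00\x12", 0, 3)
print(value, nxt)  # 5 3
```

    Writing a field is the mirror operation: the value is serialized into exactly `width` little-endian bytes.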

    Structure of a MySQL Packet

    The MySQL protocol consists of one or more MySQL packets. Regardless of the type, a MySQL packet consists of the payload length, sequence ID, and payload.

  • The payload length is of the int<3> type. It indicates the total number of bytes occupied by the subsequent payload. Note that the payload length does not include the length of the sequence ID.
  • The sequence ID is of the int<1> type. It indicates the serial number of each MySQL packet returned for a request. The maximum sequence ID that fits in one byte is 0xff, that is, 255 in decimal notation. However, this does not mean that a request can only contain up to 255 MySQL packets: if the sequence ID exceeds 255, it restarts from zero. For example, hundreds of thousands of records may be returned for a request; in this case, the MySQL packets only need to ensure that their sequence IDs are continuous, wrapping back to zero after 255.
  • The payload occupies the number of bytes declared by the payload length. In a MySQL packet, the payload is the actual business data, and its content varies with the packet type.
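    The three-part layout just described can be sketched as a small parser: a 3-byte little-endian payload length, a 1-byte sequence ID, then the payload.

```python
def parse_packet(data: bytes):
    """Split a MySQL packet into (payload_length, sequence_id, payload)."""
    payload_len = int.from_bytes(data[0:3], "little")  # int<3>, excludes seq ID
    sequence_id = data[3]                              # int<1>
    payload = data[4:4 + payload_len]
    return payload_len, sequence_id, payload

# A 1-byte payload with sequence ID 0:
print(parse_packet(b"\x01\x00\x00\x00\x01"))  # (1, 0, b'\x01')

def next_sequence_id(seq: int) -> int:
    """Sequence IDs are one byte, so they restart from zero after 255."""
    return (seq + 1) % 256
```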
  • Connection Phase

    In the connection phase, a communication channel is established between the MySQL client and server. Then, three tasks are completed in this phase: exchanging the capabilities of the MySQL client and server (Capability Negotiation), setting up an SSL communication channel, and authenticating the client against the server. The following design shows the connection setup rush from the MySQL server perspective:

    Figure 4: Flowchart of the MySQL connection phase

    The figure omits the interaction between the MySQL server and client. In fact, a MySQL connection is initiated by the client. When the MySQL server receives a connection request from the client, it exchanges capabilities with the client, generates the initial handshake packet in a format based on the negotiation result, and writes the packet to the client. The packet contains the connection ID, the server's capabilities, and the ciphertext generated for authorization.

    After receiving the handshake packet from the server, the MySQL client sends a handshake response. This packet contains the user name and encrypted password for accessing the database.

    After receiving the handshake response, the MySQL server verifies the authentication information and returns the verification result to the client.
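    For the classic mysql_native_password plugin, the "encrypted password" in the handshake response is a scramble computed from the password and the server's nonce. A sketch of that computation (assuming this particular plugin; other auth plugins differ):

```python
import hashlib

def native_password_scramble(password: bytes, nonce: bytes) -> bytes:
    """mysql_native_password auth response:
    SHA1(password) XOR SHA1(nonce + SHA1(SHA1(password)))."""
    stage1 = hashlib.sha1(password).digest()          # SHA1(password)
    stage2 = hashlib.sha1(stage1).digest()            # SHA1(SHA1(password)) - stored by server
    mixed = hashlib.sha1(nonce + stage2).digest()     # SHA1(nonce + stage2)
    # XOR the two 20-byte digests to produce the token sent to the server
    return bytes(a ^ b for a, b in zip(stage1, mixed))
```

    The server, which stores only SHA1(SHA1(password)), can verify this token without ever seeing the plaintext password on the wire.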

    Command Phase

    The command phase comes after the successful connection phase. In this phase, commands are executed. MySQL has a total of 32 command packets, whose specific types are listed below:

    Figure 5: MySQL command packets

    MySQL command packets are classified into four types: text protocol, binary protocol, stored procedure, and replication protocol.

    The first byte of the payload identifies the command type. The functions of the packets are indicated by their names. The following describes some important MySQL command packets:


    COM_QUERY is an important command that MySQL uses for queries in plain text format. It corresponds to java.sql.Statement in JDBC. COM_QUERY itself is simple and consists of a command ID and the SQL statement:

    1              [03] COM_QUERY
    string[EOF]    the query the server will execute
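    Serializing such a packet is straightforward: the 0x03 command byte is followed by the raw SQL text, and the standard 4-byte header is prepended. A sketch (the helper name `build_com_query` is illustrative):

```python
def build_com_query(sql: str, sequence_id: int = 0) -> bytes:
    """Serialize a COM_QUERY packet: 4-byte header + 0x03 command byte + SQL."""
    payload = bytes([0x03]) + sql.encode("utf-8")  # [03] COM_QUERY, string[EOF]
    # header: int<3> payload length (little-endian) + int<1> sequence id
    header = len(payload).to_bytes(3, "little") + bytes([sequence_id & 0xFF])
    return header + payload
```

    Because the SQL is a string[EOF], no terminator is needed; the declared payload length tells the server where the statement ends.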

    The COM_QUERY response packet is complex, as shown below:

    Figure 6: MySQL COM_QUERY flowchart

    Depending on the scenario, four types of COM_QUERY responses may be returned: a query result, an update result, a file execution result, or an error.

    If an error, such as a network disconnection or incorrect SQL syntax, occurs during execution, the MySQL protocol sets the first byte of the payload to 0xff, encapsulates the error message into an ErrPacket, and returns it.

    Given that files are rarely used to execute COM_QUERY, this case is not elaborated here.

    For an update request, the MySQL protocol sets the first byte of the payload to 0x00 and returns an OkPacket. The OkPacket must contain the number of rows affected by the update operation and the last inserted ID.
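    Both of those values are carried as length-encoded integers, where the first byte decides how many bytes follow. A sketch of decoding them from an OK packet payload (helper names are illustrative):

```python
def read_lenenc_int(buf: bytes, pos: int):
    """Read a MySQL length-encoded integer; return (value, new_pos)."""
    first = buf[pos]
    if first < 0xFB:
        return first, pos + 1                                     # 1-byte value
    if first == 0xFC:
        return int.from_bytes(buf[pos+1:pos+3], "little"), pos + 3  # 2-byte value
    if first == 0xFD:
        return int.from_bytes(buf[pos+1:pos+4], "little"), pos + 4  # 3-byte value
    return int.from_bytes(buf[pos+1:pos+9], "little"), pos + 9      # 0xFE: 8-byte

def parse_ok_payload(payload: bytes):
    """Extract the affected row count and last inserted ID from an OK packet."""
    assert payload[0] == 0x00  # OK packet marker
    affected, pos = read_lenenc_int(payload, 1)
    last_id, pos = read_lenenc_int(payload, pos)
    return affected, last_id
```

    For example, a payload beginning `00 03 FC 2C 01` means 3 affected rows and a last inserted ID of 300.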

    Query requests are the most complex. For such requests, a FIELD_COUNT packet must first be created so that the client can read the number of result set columns. Then, a COLUMN_DEFINITION packet is generated in sequence for each returned column. The metadata of the query fields ends with an EofPacket. After that, the Text Protocol Resultset Rows of the packet are generated row by row, with every value converted to string format regardless of its data type. Finally, the packet again ends with an EofPacket.
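    Each Text Protocol Resultset Row is a sequence of length-encoded strings, with 0xFB marking NULL. A simplified decoder, assuming every string is short enough for a 1-byte length prefix (real drivers must handle the longer length-encoded forms too):

```python
def parse_text_row(payload: bytes):
    """Decode one Text Protocol Resultset Row: a sequence of length-encoded
    strings, where every value is sent as text regardless of its SQL type."""
    values, pos = [], 0
    while pos < len(payload):
        n = payload[pos]
        if n == 0xFB:            # NULL value marker
            values.append(None)
            pos += 1
            continue
        # sketch: assume each string is shorter than 251 bytes (1-byte length)
        values.append(payload[pos+1:pos+1+n].decode("utf-8"))
        pos += 1 + n
    return values
```

    This text encoding is what makes COM_QUERY results easy to decode but wasteful on the wire compared with the binary protocol described later.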

    The java.sql.PreparedStatement operation in JDBC involves the following five MySQL binary protocol packets: COM_STMT_PREPARE, COM_STMT_EXECUTE, COM_STMT_CLOSE, COM_STMT_RESET, and COM_STMT_SEND_LONG_DATA. Among these, COM_STMT_PREPARE and COM_STMT_EXECUTE are the most important. They correspond to connection.prepareStatement() and connection.execute()/connection.executeQuery()/connection.executeUpdate() in JDBC, respectively.


    COM_STMT_PREPARE is similar to COM_QUERY; both consist of the command ID and the specific SQL statement:


    1              [16] COM_STMT_PREPARE
    string[EOF]    the query to prepare

    The return value of COM_STMT_PREPARE is not a query result but a response packet that consists of the statement_id, the number of columns, and the number of parameters. The statement_id is the unique identifier that MySQL assigns to an SQL statement after pre-compilation is completed. Based on the statement_id, the corresponding SQL statement can be retrieved from MySQL.

    For an SQL statement registered by the COM_STMT_PREPARE command, only the statement_id (rather than the SQL statement itself) needs to be sent with the COM_STMT_EXECUTE command, eliminating unnecessary consumption of network bandwidth.

    Moreover, MySQL can pre-compile the SQL statements passed in by COM_STMT_PREPARE into abstract syntax trees for reuse, improving SQL execution efficiency. If COM_QUERY is used instead, each statement must be re-compiled on every execution. For this reason, PreparedStatement is more efficient than Statement.


    COM_STMT_EXECUTE consists of the statement_id and the parameters for the SQL statement. It uses a data structure named NULL-bitmap to identify which of these parameters are null.
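    The NULL-bitmap packs one bit per parameter, so n parameters need only (n + 7) / 8 bytes instead of a marker byte per value. A sketch of building it (helper name illustrative; the real COM_STMT_EXECUTE bitmap also has an offset detail this sketch ignores):

```python
def null_bitmap(params):
    """Build a NULL-bitmap: one bit per parameter, set when the parameter
    is None; (n + 7) // 8 bytes in total."""
    bitmap = bytearray((len(params) + 7) // 8)
    for i, value in enumerate(params):
        if value is None:
            bitmap[i // 8] |= 1 << (i % 8)  # bit i marks parameter i as NULL
    return bytes(bitmap)
```

    For example, four parameters of which the second and fourth are null yield the single byte 0b00001010.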

    The response packet of the COM_STMT_EXECUTE command is similar to that of the COM_QUERY command. For both response packets, the field metadata and the query result set are returned, separated by an EofPacket.

    The difference lies in that the Text Protocol Resultset Row is replaced with the Binary Protocol Resultset Row in the COM_STMT_EXECUTE response packet. The returned data is encoded in the corresponding basic MySQL data type according to its declared type, further reducing the required network transfer bandwidth.

    Other Protocols

    In addition to MySQL, the PostgreSQL and SQL Server protocols are also openly documented and can be implemented in the same way. In contrast, another frequently used database protocol, Oracle's, is not open and cannot be implemented in the same way.

    SQL Parsing

    Although SQL is relatively simple compared to other programming languages, it is still a complete programming language. Therefore, parsing SQL grammar is essentially the same as parsing other languages such as Java, C, and Go.

    The parsing process is divided into lexical parsing and syntactic parsing. First, the lexical parser splits the SQL statement into words that cannot be further divided. Then, the syntactic parser converts the SQL statement into an abstract syntax tree. Finally, the abstract syntax tree is visited to extract the parsing context.
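    The lexical step can be illustrated with a toy tokenizer that splits a statement into indivisible tokens; a real lexer (for example, one generated by ANTLR, mentioned below) classifies tokens by type as well. A sketch under that simplification:

```python
import re

# A toy SQL lexer: one regex alternation per token shape, longest operators first.
TOKEN = re.compile(r"\s*(>=|<=|<>|!=|[(),=<>*]|'[^']*'|[A-Za-z_][A-Za-z_0-9]*|\d+)")

def tokenize(sql: str):
    """Split an SQL string into indivisible tokens before syntax parsing."""
    tokens, pos = [], 0
    while pos < len(sql):
        m = TOKEN.match(sql, pos)
        if not m:
            raise ValueError(f"unexpected character at {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens
```

    The syntactic parser then consumes this token stream to build the abstract syntax tree.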

    The parsing context includes tables, Select items, Order By items, Group By items, aggregate functions, pagination information, and query conditions. For NewSQL of the sharding middleware type, the placeholders that may be rewritten are also included.

    Take the following SQL statement as an example: select username, ismale from userinfo where age > 20 and level > 5 and 1 = 1. The abstract syntax tree after parsing is as follows:

    Figure 7: Abstract syntax tree

    Many third-party tools can be used to generate abstract syntax trees, and ANTLR is a good choice among them. ANTLR generates Java code for the abstract syntax tree based on rules defined by developers and provides a visitor interface. Compared with generated code, a manually developed abstract syntax tree executes more efficiently, but the development workload is relatively high. In scenarios with demanding performance requirements, you can consider customizing the abstract syntax tree.

    Request Routing

    The sharding strategy matches databases and tables according to the parsing context and generates the routing path. SQL routing with sharding keys can be divided into single-shard routing (where the equal sign is used as the sharding operator), multi-shard routing (where IN is used as the sharding operator), and range routing (where BETWEEN is used as the sharding operator). SQL statements without sharding keys adopt broadcast routing.

    Normally, sharding policies can be built into the database or configured by users. Sharding policies built into the database are relatively simple and can generally be divided into modulo, hash, range, tag, time, and so on. Sharding policies configured by users are more flexible and can be customized according to their needs.

    SQL Statement Rewriting

    NewSQL with a new architecture does not require SQL statement rewriting; it is only required for NewSQL of the sharding middleware type. SQL statement rewriting rewrites logical SQL statements into ones that can be correctly executed in the actual databases. This includes replacing the logical table name with the actual table name, rewriting the start and stop values of the pagination information, adding columns used for sorting, grouping, and auto-increment keys, and rewriting AVG as SUM and COUNT.
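    The pagination rewrite is the least obvious of these: to return rows 101-110 of the merged result, every shard must supply its first 110 rows, and the middleware re-applies the offset after merging. A sketch of that rule (function name hypothetical):

```python
def rewrite_pagination(offset: int, limit: int):
    """For sharded pagination, each shard must return its first offset+limit
    rows; the middleware skips `offset` rows again after merging.
    Returns the (offset, limit) to send to every shard."""
    return 0, offset + limit

# e.g. LIMIT 10 OFFSET 100 on the logical table
# becomes LIMIT 110 OFFSET 0 on each physical table
```

    The AVG rewrite follows the same logic: per-shard averages cannot be averaged again, so AVG(c) is fetched as SUM(c) and COUNT(c) and recombined by the middleware.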

    Results Merging

    Results merging refers to merging multiple execution result sets into one result set and returning it to the application. Results merging is divided into stream merging and memory merging.

  • Stream merging is used for simple queries, Order By queries, Group By queries, and combined Order By and Group By scenarios where the Order By and Group By items are completely consistent. The "next" method is called each time to traverse the stream merging result set without consuming additional memory resources.
  • Memory merging requires that all data in the result sets be loaded into memory for processing. If the result sets contain a large volume of data, a correspondingly large amount of memory is consumed.
    In Part 2 of this article, we will discuss distributed transactions and database governance in further detail.
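    To make stream merging concrete: when every shard returns rows already sorted by the same Order By items, the middleware only needs a k-way merge that holds one row per shard. A sketch using Python's standard library (illustrative, not any middleware's actual API):

```python
import heapq

def stream_merge(*sorted_result_sets):
    """Stream merging: k-way merge of per-shard result sets that are already
    sorted by the same Order By items. Yields rows one at a time, so only one
    row per input needs to be held in memory."""
    yield from heapq.merge(*sorted_result_sets)
```

    Calling next() on this generator pulls exactly one merged row at a time, which is why stream merging avoids the memory cost of loading all result sets at once.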

    Parkland Fuel Corporation Selects Metegrity’s Visions Enterprise Inspection Data Management Software (IDMS)


    Parkland Fuel Corporation, one of North America’s fastest growing fuel retailers, has selected the Visions software as the Asset Integrity Management (AIM) system for their refinery in Burnaby, BC. Parkland Fuel recently acquired the Burnaby refinery from Chevron, who had been using Meridium as their AIM software on site.

    They recognized that the existing software was insufficient for their needs. They required AIM software that offered a user-friendly interface, a rich variety of features, a more affordable cost, easily retrievable data, robust custom regulatory reporting, and the capability to interface with other products through API connectors to support their workflow. They vetted multiple products, ultimately determining that Visions best matched their needs. Having worked with Metegrity in the past and been consistently satisfied with the Visions product, Parkland recognized it as the optimal choice and began the process to switch.

    Metegrity performed an implementation study on the refinery in early March 2018, and by May of that same year the conversion had already begun. Visions V5 went live at the beginning of October 2018. It now supports over 9,700 assets for Parkland Fuel in Burnaby.

    “We are proud that Parkland’s inspection team recognizes the value of our software and had the opportunity to compare it to other IDMS software tools. These opportunities clearly demonstrate our superior solution in the market,” says Dave Maguire, Senior Advisor - Asset Integrity with Metegrity. “It is a great testament to the quality of our product and the good service we offer when a client seeks you out based on confidence from past experience.”

    About Metegrity

    Metegrity is a globally trusted provider of comprehensive quality & asset integrity management software solutions. Praised for unparalleled speed of deployment, our products are also highly configurable – allowing our experts to strategically tailor them to your business practices. With more than 20 years in the industry, we proudly service top-tier global organizations in the Oil & Gas, Pipeline & Chemical industries. For more information, visit

    Black Lab Software Announces Linux-Based Mac Mini Competitor Black Lab BriQ v5

    We have been informed by Black Lab Software, the creators of the Ubuntu-based Black Lab Linux operating system, about the general availability of their new class of hardware, the Black Lab BriQ version 5.

    The 5th version of the Black Lab BriQ computer comes with many new features, among which we can mention the re-implementation of VGA for all editions, HDMI support, air cooling support for reduced power usage, as well as support for adding either a 2.5" SATA drive or an SSD disk. These will save up to 38% and 64% of energy, respectively.

    "The 5th incarnation of the Black Lab BriQ offers unique features and enhancements which distinguish it from its predecessors," says Robert Dohnert. "First, VGA has been reintroduced on all models; HDMI is still included. The BriQ is totally air‐cooled, which reduces power usage - energy savings are over 64% with the SSD drive option and 38% with a traditional laptop SATA hard drive."

    Another interesting aspect of the new Black Lab BriQ version 5 computer is that it's over 20% slimmer than previous versions. According to Mr. Dohnert, Black Lab BriQ v5 is the most environmentally friendly system on the planet, as the motherboard is 98% carcinogen-free, and the entire chassis is now made from recycled aluminum, which, in turn, is also recyclable.

    Black Lab BriQ v5 has the same specs as Apple's Mac Mini

    The new Black Lab BriQ v5 hardware is available today in two different configurations, one with 4GB RAM, a 64GB SSD drive, and an Intel i3 CPU running at 1.7GHz, and the other with 4GB RAM, a 500GB HDD, and the same Intel i3 processor running at 1.7GHz. The SSD version will cost you $515.00 (€480), and the HDD model is priced at only $450.00 (€420).

    Black Lab Software claims that the specs of the Black Lab BriQ v5 are equal to those of Apple's Mac Mini computer, but if you buy the Black Lab BriQ, you'll save over $300.00 (€280). But wait, there's more, as Black Lab Software also offers a Pro version of the Black Lab BriQ v5, which comes with Intel i5 CPUs, up to 16GB RAM, and a 256GB SSD or 1TB HDD.

    Black Lab BriQ Pro models cost $775.00 (€730) if you go for the SSD version, and $995.00 (€930) if you choose the HDD edition. Also, both Pro models of the Black Lab BriQ version 5 come with a 3-year extended warranty. You can purchase a Black Lab BriQ v5 computer right now from the official webstore of Black Lab Software.

    Black Lab BriQ v5 back view

