1Z0-058 | A Review of High-Value 1Z0-058 VCE


Q41. You plan to use Enterprise Manager to locate and stage patches to your Oracle Home. 

The software library has been configured to be downloaded to /u01/app/oracle and your "My Oracle Support" credentials have been entered. 

You want to start the provisioning daemon in order to use the deployment procedure manager to view, edit, monitor, and run deployment procedures. 

How would you start the provisioning daemon? 

A. using pafctl start 

B. using crsctl start paf 

C. using srvctl start paf 

D. using emctl start paf 

Answer: A

Explanation: 

Starting the Provisioning Daemon

The provisioning daemon is started with:

$ pafctl start
Enter repository user password:
Enter interval [default 3]:
Provisioning Daemon is Up, Interval = 3

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 4 – 26
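A wrapper script can sanity-check the daemon state from this kind of transcript. A minimal sketch follows; the status string is a sample captured as text, since running pafctl itself requires an Enterprise Manager provisioning environment:

```shell
# Parse a pafctl-style status line (sample text only; pafctl itself
# needs an EM-provisioned host and repository credentials)
out="Provisioning Daemon is Up, Interval = 3"
case "$out" in
  *"is Up"*) msg="daemon running, interval ${out##*= }" ;;
  *)         msg="daemon not running" ;;
esac
echo "$msg"
```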

Q42. Which four statements are true about ADVM interoperability? 

A. Using fdisk or similar disk utilities to partition ADVM-managed volumes is not supported 

B. On Linux platforms, the raw utility can be used to map ADVM volume block devices to raw volume devices. 

C. The creation of multipath devices over ADVM devices is not supported. 

D. You may create ASMLIB devices over ADVM devices to simplify volume management. 

E. ADVM does not support ASM storage contained in Exadata. 

F. ADVM volumes cannot be used as a boot device or a root file system. 

Answer: A,C,E,F 

Explanation: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) extend Oracle ASM support to include database and application executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data. Because Oracle ADVM volumes are, technically speaking, ASM files stored in ASM disk groups, and because dynamic volumes do not use traditional device partitioning, Oracle can extend ASM features to the ASM Cluster File Systems created on these ADVM volumes, such as dynamic resizing and dynamically adding volumes. This makes ADVM and ACFS a far more flexible solution than traditional physical devices. 

Important Notes: 

- Partitioning of dynamic volumes (using fdisk or similar) is not supported 

- Do not use raw to map ADVM volume block devices into raw volume devices 

- Do not create multipath devices over ADVM devices 

- Do not create ASMLIB devices over ADVM devices 

- Oracle ADVM supports all storage solutions supported for Oracle ASM, with the exception of NFS and Exadata storage 

- ADVM volumes cannot be used as a boot device or a root file system 
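An operations script can enforce the first restriction by refusing to partition ADVM volume devices. This is only a sketch: the /dev/asm/ naming is the Linux convention for ADVM volume devices, and the device path used here is made up:

```shell
# Guard: refuse to partition ADVM volume devices (hypothetical path;
# on Linux, ADVM volumes appear under /dev/asm/)
dev="/dev/asm/datavol-123"
case "$dev" in
  /dev/asm/*) verdict="refuse: partitioning ADVM volumes is unsupported" ;;
  *)          verdict="ok: $dev is not an ADVM volume" ;;
esac
echo "$verdict"
```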

Q43. A Grid Plug and Play (GPnP) software image must contain at least three items. Which three items are required? 

A. operating system software 

B. Oracle Database software 

C. GPnP software 

D. security certificate of the provisioning authority 

E. application software 

Answer: A,C,D 

Explanation: 

GPnP Components 

Software image 

A software image is a read-only collection of software to be run on nodes of the same type. 

At a minimum, the image must contain: 

-An operating system 

-The GPnP software 

-A security certificate from the provisioning authority 

-Other software required to configure the node when it starts up 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration 

Q44. As part of your data center's high availability strategy, you are creating resource definitions to control the management of a web-based application by the Oracle Grid Infrastructure clusterware stack. 

The application and its VIP are normally online on one node of a four-node cluster due to the CARDINALITY of the resource type being set to 1. 

You have chosen a policy-managed resource type for the application by using a server pool that uses only RACNODE3 and RACNODE4. The RESTART_ATTEMPTS attribute for the resource is set to 2 and FAILURE_INTERVAL is set to 60. 

What is true about the attributes that may be set to control the application? 

A. The clusterware will attempt to start the application on the same node twice within the server pool as long as that node is up. If the node fails, then the VIP and the application will be failed over to the other node in the server pool immediately. 

B. The clusterware will attempt to start the application on the same node twice within the server pool as long as that node is up. If the node fails, then the VIP and the application will be failed over to the other node in the server pool only after two 60-second intervals have elapsed. 

C. The clusterware will attempt to start the application on the same node twice within the server pool as long as that node is up. If the application fails to start after 60 seconds, but the node is still up, then the VIP and the application will NOT be failed over to the other node in the server pool. 

D. The clusterware will attempt to start the application on the same node twice within the server pool as long as that node is up. If the application fails to start immediately but the node is still up, then the VIP and the application will NOT be failed over to the other node in the server pool. 

Answer: A

Explanation: CARDINALITY 

The number of servers on which a resource can run simultaneously. This is the upper limit for resource cardinality. 

RESTART_ATTEMPTS 

The number of times that Oracle Clusterware attempts to restart a resource on the resource's current server before attempting to relocate it. A value of 1 indicates that Oracle Clusterware only attempts to restart the resource once on a server. A second failure causes Oracle Clusterware to attempt to relocate the resource. A value of 0 indicates that there is no attempt to restart but Oracle Clusterware always tries to fail the resource over to another server. 

FAILURE_INTERVAL 

The interval, in seconds, before which Oracle Clusterware stops a resource if the resource has exceeded the number of failures specified by the FAILURE_THRESHOLD attribute. If the value is zero (0), then tracking of failures is disabled. 

FAILURE_THRESHOLD 

The number of failures of a resource detected within a specified FAILURE_INTERVAL for the resource before Oracle Clusterware marks the resource as unavailable and no longer monitors it. If a resource fails the specified number of times, then Oracle Clusterware stops the resource. If the value is zero (0), then tracking of failures is disabled. The maximum value is 20. 

Oracle Clusterware Administration and Deployment Guide, 11g Release 2 (11.2) 
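The restart-versus-relocate decision described above can be modeled in a few lines. This is a simplified illustration of the RESTART_ATTEMPTS semantics only, not Oracle code; Oracle Clusterware implements this logic internally:

```shell
# Simplified model of RESTART_ATTEMPTS=2: restart in place until the
# attempt count is exceeded, then relocate (illustration only)
RESTART_ATTEMPTS=2
failures=0
decide() {
  failures=$((failures + 1))
  if [ "$failures" -le "$RESTART_ATTEMPTS" ]; then
    echo "failure $failures: restart in place on current server"
  else
    echo "failure $failures: relocate resource to another server"
  fi
}
decide; decide; decide
```

After the third simulated failure the model relocates, matching the guide's description that a failure beyond RESTART_ATTEMPTS triggers relocation.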

Q45. Which two statements are true about ACFS snapshots? 

A. They can be created for ACFS file systems only if the ASM disk group hosting the ADVM volume file used by the file system has free space available. 

B. They can be created for ACFS file systems only if the ADVM volume file used by the file system has free space available. 

C. They can be created only if the ASM disk group hosting the ADVM volume used by the file system has no other ASM files contained in the disk group. 

D. They can be created when ACFS is used both on clusters and on stand-alone servers. 

E. They are accessible only on the cluster node that was used when creating the snapshot. 

Answer: B,D 

Explanation: 

About Oracle ACFS Snapshots 

Oracle ACFS snapshot storage is maintained within the file system, eliminating the management of separate storage pools for file systems and snapshots. Oracle ACFS file systems can be dynamically resized to accommodate additional file and snapshot storage requirements. 

Oracle Automatic Storage Management Administrator's Guide, 11g Release 2 (11.2) 

Q46. Examine the following output: 

[oracle@gr5153 ~]$ sudo crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[oracle@gr5153 ~]$ srvctl config database -d RACDB -a
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: POOL1
Database instances:
Disk Groups: DATA, FRA
Services:
Database is enabled
Database is policy managed

Oracle Clusterware is started automatically after the system boot. Which two statements are true regarding the attributes of RACDB? 

A. Oracle Clusterware automatically starts RACDB. 

B. You must manually start RACDB. 

C. Database resource is managed by crsd for high availability and may be automatically restarted in place if it fails. 

D. Database resource is not managed by crsd for high availability and needs to be restarted manually if it fails. 

Answer: A,C 

Explanation: 

Switch Between the Automatic and Manual Policies 

By default, Oracle Clusterware is configured to start the VIP, listener, instance, ASM, database services, and other resources during system boot. It is possible to modify some resources to have their profile parameter AUTO_START set to the value 2. This means that after a node reboot, or when Oracle Clusterware is started, resources with AUTO_START=2 must be started manually via srvctl. This is designed to assist in troubleshooting and system maintenance. When changing resource profiles through srvctl, the command tool automatically modifies the profile attributes of other dependent resources, given the current prebuilt dependencies. The command to accomplish this is: 

srvctl modify database -d <dbname> -y AUTOMATIC|MANUAL 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 15 – 3 
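A quick way to confirm which policy is in effect is to parse the srvctl output shown in the question. A minimal sketch, using the captured text as sample data rather than a live query (a real check would pipe `srvctl config database -d RACDB -a` directly):

```shell
# Determine the management policy from captured srvctl output
# (here-string is sample text, not a live srvctl run)
config="Management policy: AUTOMATIC"
policy=$(printf '%s\n' "$config" | awk -F': ' '/Management policy/ {print $2}')
if [ "$policy" = "AUTOMATIC" ]; then
  echo "Clusterware starts this database automatically"
else
  echo "Database must be started manually with srvctl"
fi
```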

3.4.1 Benefits of Using Oracle Clusterware 

Oracle Clusterware provides the following benefits: 

- Tolerates and quickly recovers from computer and instance failures. 

- Simplifies management and support by means of using Oracle Clusterware together with Oracle Database. By using fewer vendors and an all-Oracle stack, you gain better integration compared to using third-party clusterware. 

- Performs rolling upgrades for system and hardware changes. For example, you can apply Oracle Clusterware upgrades, patch sets, and interim patches in a rolling fashion, as follows: 

  - Upgrade Oracle Clusterware from Oracle Database 10g to Oracle Database 11g 

  - Upgrade Oracle Clusterware from Oracle Database release 11.1 to release 11.2 

  - Patch Oracle Clusterware from Oracle Database 11.1.0.6 to 11.1.0.7 

  - Patch Oracle Clusterware from Oracle Database 10.2.0.2 Bundle 1 to Oracle Database 10.2.0.2 Bundle 2 

- Automatically restarts failed Oracle processes. 

- Automatically manages the virtual IP (VIP) address, so that when a node fails, the node's VIP address fails over to another node on which the VIP address can accept connections. 

- Automatically restarts resources from failed nodes on surviving nodes. 

- Controls Oracle processes as follows: for Oracle RAC databases, Oracle Clusterware controls all Oracle processes by default; for Oracle single-instance databases, Oracle Clusterware allows you to configure the Oracle processes into a resource group that is under the control of Oracle Clusterware. 

- Provides an application programming interface (API) for Oracle and non-Oracle applications that enables you to control other Oracle processes with Oracle Clusterware, such as restart or react to failures and certain rules. 

- Manages node membership and prevents split-brain syndrome, in which two or more instances attempt to control the database. 

- Provides the ability to perform rolling release upgrades of Oracle Clusterware, with no downtime for applications. 

Oracle Database High Availability Overview, 11g Release 2 (11.2) 

Q47. Which three programs or utilities can be used to convert a single-instance database to a Real Application Cluster database? 

A. DBCA 

B. the SRVCTL utility 

C. Enterprise Manager 

D. the RCONFIG utility 

E. the CRSCTL utility 

Answer: A,C,D 

Explanation: 

Single Instance to RAC Conversion 

Single-instance databases can be converted to RAC using: 

– DBCA 

– Enterprise Manager 

– RCONFIG utility 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 

Q48. Examine the following details for a six-instance RAC database: 

What is the most prominent problem indicated by the above output? 

A. high input/output (I/O) delays 

B. CPU saturation and memory depletion 

C. large number of untuned queries from one of the nodes 

D. misconfigured or faulty interconnect 

Answer: D

Explanation: In Oracle RAC environments, the RDBMS gathers global cache workload statistics, which are reported in STATSPACK, AWR, and Grid Control. Global cache lost-block statistics ("gc cr block lost" and/or "gc current block lost") for each node in the cluster, as well as aggregate statistics for the cluster, represent a problem or inefficiency in packet processing for the interconnect traffic. These statistics should be monitored and evaluated regularly to guarantee efficient interconnect Global Cache and Enqueue Service (GCS/GES) and cluster processing. Any block loss indicates a problem in network packet processing and should be investigated. 

Symptoms: 

Primary: 

- "gc cr block lost" / "gc current block lost" in the top 5 or as a significant wait event 

Secondary: 

- SQL traces report multiple gc cr requests / gc current requests / gc cr multiblock requests with long and uniform elapsed times 

- Poor application performance / throughput 

- Packet send/receive errors as displayed by ifconfig or a vendor-supplied utility 

- netstat reports errors/retransmits/reassembly failures 

- Node failures and node integration failures 

- Abnormal CPU consumption attributed to network processing 

Changes 

As explained above, lost blocks are generally caused by an unreliable private network. This can be caused by a bad patch, a faulty network configuration, or a hardware issue. 

Cause 

In most cases, gc block lost has been attributed to (a) a missing OS patch, (b) a bad network card, (c) a bad cable, (d) a bad switch, or (e) one of the network settings. 

Oracle Metalink “gc block lost diagnostics [ID 563566.1]” 
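One of the secondary symptoms above, reassembly failures in netstat, is easy to scan for in a health-check script. The counters below are sample text standing in for `netstat -s` output, since the sketch is meant to illustrate the check, not a live host:

```shell
# Scan netstat -s style output for IP reassembly failures, a secondary
# symptom of gc block lost (sample counters, not a live host)
stats="431201 total packets received
57 packet reassemblies failed"
fails=$(printf '%s\n' "$stats" | awk '/reassembl/ {print $1}')
if [ "${fails:-0}" -gt 0 ]; then
  echo "interconnect suspect: $fails reassembly failures"
fi
```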

Q49. Which three are components of Oracle HA Framework for protecting third-party applications using Oracle Clusterware? 

A. resources 

B. action programs 

C. voting disks 

D. application VIPs 

E. SCAN VIPs 

Answer: A,B,D 

Explanation: 

Oracle Clusterware HA Components 


D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 5 - 4 

Q50. Some new non-ASM shared storage has been made available by the storage administrator, and the Oracle Grid Infrastructure administrator decides to move the voting disks, which do not reside in ASM, to this new non-ASM location. How can this be done? 

A. by running crsctl add css votedisk <path_to_new_location> followed by crsctl delete css –votedisk <path_to_old_location> 

B. by running crsctl replace css votedisk <path_to_old_location,path_to_new_location> 

C. by running srvctl replace css votedisk <path_to_old_location, path_to_new_location> 

D. by running srvctl add css votedisk <path_to_new_location> followed by srvctl delete css votedisk <path_to_old_location> 

Answer: A

Explanation: 

Adding, Deleting, or Migrating Voting Disks 

Modifying voting disks that are stored in Oracle ASM: 

To migrate voting disks from Oracle ASM to an alternative storage device, specify the path to the non-Oracle ASM storage device with which you want to replace the Oracle ASM disk group using the following command: 

$ crsctl replace votedisk path_to_voting_disk 

You can run this command on any node in the cluster. 

To replace all voting disks not stored in Oracle ASM with voting disks managed by Oracle ASM in an Oracle ASM disk group, run the following command: $ crsctl replace votedisk +asm_disk_group 

Modifying voting disks that are not stored in Oracle ASM: 

To add one or more voting disks, run the following command, replacing the path_to_voting_disk variable with one or more space-delimited, complete paths to the voting disks you want to add: 

$ crsctl add css votedisk path_to_voting_disk [...] 

To replace voting disk A with voting disk B, you must add voting disk B, and then delete voting disk A. To add a new disk and remove the existing disk, run the following command, replacing the path_to_voting_diskB variable with the fully qualified path name of voting disk B: 

$ crsctl add css votedisk path_to_voting_diskB -purge 

The -purge option deletes existing voting disks. 

To remove a voting disk, run the following command, specifying one or more space-delimited voting disk FUIDs or comma-delimited directory paths to the voting disks you want to remove: 

$ crsctl delete css votedisk {FUID | path_to_voting_disk[...]} 

Oracle Clusterware Administration and Deployment Guide, 11g Release 2 (11.2)
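The add-then-delete pattern for non-ASM voting disks can be captured in a small migration script. The commands are echoed as a plan rather than executed, since crsctl requires a live Grid Infrastructure stack, and both storage paths below are hypothetical:

```shell
# Sketch: a non-ASM votedisk "replace" is add-then-delete
# (paths are hypothetical; crsctl needs a real Grid Infrastructure)
new_disk=/nas/cluster1/vdisk_new
old_disk=/nas/cluster1/vdisk_old
plan="crsctl add css votedisk $new_disk
crsctl delete css votedisk $old_disk"
printf '%s\n' "$plan"
```

Running the planned commands in that order keeps a valid voting disk available throughout the move.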