070-475 Free Practice Questions: "Microsoft Design and Implement Big Data Analytics Solutions"

You have a Microsoft Azure SQL data warehouse named DW1.
A department in your company creates an Azure SQL database named DB1. DB1 is a data mart.
Each night, you need to insert new rows into 9,000 tables in DB1 from changed data in DW1. The solution must minimize costs.
What should you use to move the data from DW1, and then to import the changed data into DB1? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: Azure Data Factory
Use the Copy Activity in Azure Data Factory to move data to/from Azure SQL Data Warehouse.
Box 2: The BULK INSERT statement
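For reference, a minimal T-SQL sketch of one nightly load, assuming the changed rows have been staged as a delimited file in Azure Blob storage; the table name, blob path, and external data source name are hypothetical placeholders:

-- Minimal sketch: bulk-load one staged delta file into a DB1 table.
-- dbo.FactSales, the blob path, and StagingBlobStorage are assumptions;
-- on Azure SQL Database, BULK INSERT reads from Azure Blob storage
-- through an external data source.
BULK INSERT dbo.FactSales
FROM 'deltas/factsales_delta.csv'
WITH (
    DATA_SOURCE = 'StagingBlobStorage',  -- external data source over Blob storage
    FIELDTERMINATOR = ',',               -- column delimiter in the staged file
    ROWTERMINATOR = '\n',                -- row delimiter
    FIRSTROW = 2                         -- skip the header row
);

A statement of this shape would be repeated, typically generated dynamically, for each of the 9,000 target tables.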
Your company has several thousand sensors deployed.
You have a Microsoft Azure Stream Analytics job that receives two data streams, Input1 and Input2, from an Azure event hub. The data streams are partitioned by using a column named SensorName. Each sensor is identified by a field named SensorID.
You discover that Input2 is empty occasionally and the data from Input1 is ignored during the processing of the Stream Analytics job.
You need to ensure that the Stream Analytics job always processes the data from Input1.
How should you modify the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: LEFT OUTER JOIN
LEFT OUTER JOIN specifies that all rows from the left table not meeting the join condition are included in the result set, and output columns from the other table are set to NULL in addition to all rows returned by the inner join.
Box 2: ON I1.SensorID = I2.SensorID
References: https://docs.microsoft.com/en-us/stream-analytics-query/join-azure-stream-analytics
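A minimal sketch of the reshaped query, with the inputs aliased as I1 and I2; the EventTime field, the SELECT list, and the five-minute window are assumptions. Note that a Stream Analytics join between two streams also requires a DATEDIFF time bound in the ON clause:

-- Minimal sketch; EventTime, the projected columns, and the window
-- size are assumptions, not part of the question.
SELECT
    I1.SensorID,
    I1.SensorName,
    I2.SensorName AS SensorName2
FROM Input1 AS I1 TIMESTAMP BY EventTime
LEFT OUTER JOIN Input2 AS I2 TIMESTAMP BY EventTime
    ON I1.SensorID = I2.SensorID
    AND DATEDIFF(minute, I1, I2) BETWEEN 0 AND 5

-- The LEFT OUTER JOIN keeps every Input1 event even when no matching
-- Input2 event arrives within the window, so Input1 is never ignored.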
You need to automate the creation of a new Microsoft Azure data factory.
What are three possible technologies that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

Correct answers: A, B, D
You have a Microsoft Azure SQL database that contains Personally Identifiable Information (PII).
To mitigate the PII risk, you need to ensure that data is encrypted while the data is at rest. The solution must minimize any changes to front-end applications.
What should you use?
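The stated requirements (encryption at rest with minimal changes to front-end applications) match Transparent Data Encryption (TDE), which encrypts database files transparently to applications. Assuming TDE is the intended answer, a minimal T-SQL sketch with a hypothetical database name of DB1:

-- Minimal sketch: enable Transparent Data Encryption (TDE).
-- The database name DB1 is a placeholder.
ALTER DATABASE DB1 SET ENCRYPTION ON;

-- Confirm the setting (is_encrypted = 1 once encryption completes).
SELECT name, is_encrypted
FROM sys.databases
WHERE name = 'DB1';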

You are designing a solution that will use Apache HBase on Microsoft Azure HDInsight.
You need to design the row keys for the database to ensure that client traffic is directed over all of the nodes in the cluster.
What are two possible techniques that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

Correct answers: A, C
You have a Microsoft Azure Data Factory pipeline.
You discover that the pipeline fails to execute because data is missing.
You need to rerun the failure in the pipeline.
Which cmdlet should you use?

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to implement a new data warehouse.
You have the following information regarding the data warehouse:
* The first data files for the data warehouse will be available in a few days.
* Most queries that will be executed against the data warehouse are ad-hoc.
* The schemas of data files that will be loaded to the data warehouse change often.
* One month after the planned implementation, the data warehouse will contain 15 TB of data.
You need to recommend a database solution to support the planned implementation.
Solution: You recommend Microsoft SQL Server on a Microsoft Azure virtual machine.
Does this meet the goal?

You deploy a Microsoft Azure SQL database.
You create a job to upload customer data to the database.
You discover that the job cannot connect to the database and fails.
You verify that the database runs successfully in Azure.
You need to run the job successfully.
What should you create?

You plan to deploy a Hadoop cluster that includes a Hive installation.
Your company identifies the following requirements for the planned deployment:
* During the creation of the cluster nodes, place JAR files in the clusters.
* Decouple the Hive metastore lifetime from the cluster lifetime.
* Provide anonymous access to the cluster nodes.
You need to identify which technology must be used for each requirement.
Which technology should you identify for each requirement? To answer, drag the appropriate technologies to the correct requirements. Each technology may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Correct answer:

Explanation
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy a Microsoft Azure SQL data warehouse and a web application.
The data warehouse will ingest 5 TB of data from an on-premises Microsoft SQL Server database daily. The web application will query the data warehouse.
You need to design a solution to ingest data into the data warehouse.
Solution: You use AzCopy to transfer the data as text files from SQL Server to Azure Blob storage, and then you use Azure Data Factory to refresh the data warehouse database.
Does this meet the goal?
