How are prompt templates typically designed in language model applications?
(A) To be used without any modification or customization
(B) As complex algorithms that require manual compilation
(C) To work only with numerical data instead of textual content
(D) As predefined recipes that guide the generation of language model prompts
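
Option (D) is the idea to remember: a prompt template is a reusable recipe with fixed wording and named placeholders. A minimal sketch using LangChain's `PromptTemplate` (assuming the `langchain-core` package is installed; the template text and variable names are illustrative):

```python
# A prompt template is a predefined recipe: fixed wording plus named
# placeholders that are filled in at request time.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Summarize the following {doc_type} in {word_count} words:\n\n{text}"
)

# The same recipe is reused with different inputs on every request.
prompt = template.format(doc_type="support ticket", word_count=50, text="...")
print(prompt)
```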
What is the primary benefit of few-shot prompting with a Large Language Model (LLM)?
(A) It eliminates the need for any training or computational resources.
(B) It allows the LLM to access a larger dataset.
(C) It significantly reduces the latency for each model request.
(D) It provides examples in the prompt to guide the LLM to better performance with no training cost.
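
A quick illustration of option (D): the demonstrations are embedded in the prompt itself, so the model is steered at inference time without any gradient updates. Plain Python, no libraries; the reviews and labels are made up:

```python
# Few-shot prompting: a handful of worked examples go into the prompt,
# steering the model with zero training cost.
examples = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
]

shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt = f"{shots}\nReview: The plot was dull and predictable.\nSentiment:"
print(prompt)  # send this string to any LLM completion endpoint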
When does a chain typically interact with memory during a run in the LangChain framework?
(A) Continuously throughout the entire chain execution process.
(B) Only after the output has been generated.
(C) Before user input and after chain execution.
(D) After user input but before chain execution, and again after core logic but before output.
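
Option (D) describes two touchpoints: memory is read after the user input arrives (before the core logic runs) and written after the core logic finishes (before output is returned). A sketch of that flow with the classic `langchain` package's `ConversationBufferMemory`; the chain's core logic is stubbed out here:

```python
# The two memory touchpoints in a chain run: load after user input,
# save after core logic, before the output is returned.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

user_input = "What did I ask you last time?"
history = memory.load_memory_variables({})             # touchpoint 1: after input
answer = f"(model output, conditioned on: {history})"  # core logic (stub)
memory.save_context({"input": user_input}, {"output": answer})  # touchpoint 2
print(answer)                                          # output returned last
```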
What is the purpose of embeddings in natural language processing?
(A) To create numerical representations of text that capture the meaning and relationships between words or phrases
(B) To translate text into a different language
(C) To compress text data into smaller files for storage
(D) To increase the complexity and size of text data
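
The point of option (A) is that embeddings turn meaning into geometry: related texts end up as nearby vectors. A self-contained sketch with hand-made toy vectors standing in for a real embedding model's output:

```python
# Embeddings map text to vectors; semantic similarity becomes cosine
# similarity between those vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

king, queen, apple = [0.9, 0.8, 0.1], [0.85, 0.82, 0.12], [0.1, 0.2, 0.95]
print(cosine(king, queen))  # high: related words sit close together
print(cosine(king, apple))  # low: unrelated words sit far apart
```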
How does fine-tuning differ from Parameter-Efficient Fine-Tuning (PEFT) in terms of data and computational requirements?
(A) Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.
(B) Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with fine-tuning requiring labeled data and PEFT using unlabeled data.
(C) PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than fine-tuning.
(D) Both fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data- and computationally intensive.
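
Option (A) captures the trade-off. As a concrete sketch, LoRA via Hugging Face's `peft` library trains only small adapter matrices while the base weights stay frozen (assumes the `transformers` and `peft` packages; `gpt2` and the `c_attn` target module are just one example pairing):

```python
# PEFT (here: LoRA) updates only a small set of adapter parameters;
# the base model's weights are never touched.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(base, config)

# Prints something like "trainable params: 0.3M || all params: 124M" --
# a tiny fraction of what full fine-tuning would update.
model.print_trainable_parameters()
```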
What is LCEL in the context of LangChain Chains?
(A) A declarative way to compose chains together using LangChain Expression Language
(B) An older Python library for building Large Language Models
(C) A legacy method for creating chains in LangChain
(D) A programming language used to write documentation for LangChain
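
Option (A) is what LCEL looks like in code: runnables composed declaratively with the `|` operator. A minimal sketch (assumes `langchain-core` and `langchain-openai`; `ChatOpenAI` stands in for any chat model and needs an API key):

```python
# LCEL: declarative composition of prompt -> model -> parser via "|".
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI() | StrOutputParser()

print(chain.invoke({"topic": "vector databases"}))
```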
What does in-context learning in Large Language Models involve?
(A) Conditioning the model with task-specific instructions or demonstrations
(B) Pretraining the model on a specific domain
(C) Adding more layers to the model
(D) Training the model using reinforcement learning
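
Option (A) in one picture: the task is defined entirely inside the prompt, as an instruction plus an optional demonstration, with no weight updates. Plain Python; the translation task is arbitrary:

```python
# In-context learning: the model is conditioned by instructions and
# demonstrations placed in the context window, not by training.
instruction = "Translate English to French."
demonstration = "English: cheese\nFrench: fromage"
query = "English: bread\nFrench:"

prompt = f"{instruction}\n\n{demonstration}\n\n{query}"
print(prompt)  # any capable LLM completes this from context alone
```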
How does the T-Few fine-tuning method contribute to the efficiency of the fine-tuning process?
(A) By excluding transformer layers from the fine-tuning process entirely
(B) By incorporating additional layers to the base model
(C) By allowing updates across all layers of the model
(D) By restricting updates to only a specific group of transformer layers
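
Option (D) names the efficiency mechanism: most weights stay frozen and gradients flow only through a chosen group of layers. A PyTorch sketch of that general idea (not the exact T-Few recipe; the 6-layer encoder and the choice of "last two layers" are illustrative):

```python
# Layer-restricted fine-tuning: freeze everything, then unfreeze only
# a specific group of transformer layers.
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=6,
)

for p in model.parameters():              # freeze the whole stack
    p.requires_grad = False
for p in model.layers[-2:].parameters():  # unfreeze only the last two layers
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")
```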
What does the "stop sequence" parameter do in generative text models?
(A) It controls the randomness of the model's output, affecting its creativity.
(B) It assigns a penalty to frequently occurring tokens to reduce repetitive text.
(C) It determines the maximum number of tokens the model can generate per response.
(D) It specifies a string that tells the model to stop generating more content.
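
All four options describe real generation parameters; (D) is the stop sequence. Here is how they appear together in one concrete SDK, OpenAI's Python client (parameter names vary by provider; the model name and values are arbitrary, and an API key is assumed):

```python
# The four generation parameters from the options above, side by side.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "List three fruits, one per line."}],
    temperature=0.2,        # (A) randomness / creativity of the output
    frequency_penalty=0.5,  # (B) penalize frequently repeated tokens
    max_tokens=50,          # (C) cap on tokens generated per response
    stop=["\n\n"],          # (D) stop sequence: generation halts here
)
print(resp.choices[0].message.content)
```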