4.6 replicaset controller


Introduction

Overview

The replicaset controller is one of the many controllers in the kube-controller-manager component and is the controller for the replicaset resource object. It watches two resource types, replicaset and pod; whenever either of them changes, the replicaset controller is triggered to reconcile the corresponding replicaset object so that the desired replica count is met: pods are created when the actual pod count falls short of the desired count, and deleted when it exceeds it.

In short, the replicaset controller compares the number of pods the replicaset object desires with the number of pods that currently exist, creates or deletes pods based on that comparison, and eventually drives the actual pod count to equal the desired pod count.
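Before diving into the source, here is a minimal sketch of that compare-then-create/delete decision. This is an illustration only: the function name reconcile and its signature are invented for the example, and the real syncReplicaSet/manageReplicas logic additionally handles expectations, burst limits, owner references and victim selection.

package main

import "fmt"

// reconcile is a highly simplified sketch of the decision the replicaset
// controller makes on every sync: compare the desired replica count with the
// pods it currently owns, then create the missing pods or delete the surplus.
func reconcile(desired int, owned []string, create func() error, remove func(name string) error) error {
	diff := len(owned) - desired
	switch {
	case diff < 0: // fewer pods than desired: create the missing ones
		for i := 0; i < -diff; i++ {
			if err := create(); err != nil {
				return err
			}
		}
	case diff > 0: // more pods than desired: delete the surplus
		// Here we simply delete the first diff pods; the real controller
		// ranks pods to decide which ones to delete.
		for _, name := range owned[:diff] {
			if err := remove(name); err != nil {
				return err
			}
		}
	}
	return nil // diff == 0: nothing to do
}

func main() {
	pods := []string{"rs-abc", "rs-def", "rs-ghi"}
	_ = reconcile(2, pods,
		func() error { fmt.Println("create pod"); return nil },
		func(name string) error { fmt.Printf("delete pod %s\n", name); return nil },
	)
}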

Architecture Diagram

The rough composition and processing flow of the replicaset controller is shown in the diagram below. The replicaset controller registers event handlers for pod and replicaset objects; when an event is watched, the key of the corresponding replicaset object is put into the queue. The syncReplicaSet method holds the core reconciliation logic of the replicaset controller: it takes replicaset objects out of the queue and reconciles them.

Initialization and Startup Analysis

NewReplicaSetController initializes the ReplicaSetController: it wires up the informers for the replicaset and pod objects and registers the event handlers (AddFunc, UpdateFunc and DeleteFunc) used to watch for changes to replicaset and pod objects.

// ReplicaSetController is responsible for synchronizing ReplicaSet objects stored
// in the system with actual running pods.
type ReplicaSetController struct {
	// GroupVersionKind indicates the controller type.
	// Different instances of this struct may handle different GVKs.
	// For example, this struct can be used (with adapters) to handle ReplicationController.
	schema.GroupVersionKind

	kubeClient clientset.Interface
	podControl controller.PodControlInterface

	eventBroadcaster record.EventBroadcaster

	// A ReplicaSet is temporarily suspended after creating/deleting these many replicas.
	// It resumes normal action after observing the watch events for them.
	burstReplicas int
	// To allow injection of syncReplicaSet for testing.
	syncHandler func(ctx context.Context, rsKey string) error

	// A TTLCache of pod creates/deletes each rc expects to see.
	expectations *controller.UIDTrackingControllerExpectations

	// A store of ReplicaSets, populated by the shared informer passed to NewReplicaSetController
	rsLister appslisters.ReplicaSetLister
	// rsListerSynced returns true if the pod store has been synced at least once.
	// Added as a member to the struct to allow injection for testing.
	rsListerSynced cache.InformerSynced
	rsIndexer      cache.Indexer

	// A store of pods, populated by the shared informer passed to NewReplicaSetController
	podLister corelisters.PodLister
	// podListerSynced returns true if the pod store has been synced at least once.
	// Added as a member to the struct to allow injection for testing.
	podListerSynced cache.InformerSynced

	// Controllers that need to be synced
	queue workqueue.TypedRateLimitingInterface[string]
}
// NewReplicaSetController configures a replica set controller with the specified event recorder
func NewReplicaSetController(ctx context.Context, rsInformer appsinformers.ReplicaSetInformer, podInformer coreinformers.PodInformer, kubeClient clientset.Interface, burstReplicas int) *ReplicaSetController {
	logger := klog.FromContext(ctx)
	eventBroadcaster := record.NewBroadcaster(record.WithContext(ctx))
	if err := metrics.Register(legacyregistry.Register); err != nil {
		logger.Error(err, "unable to register metrics")
	}
	return NewBaseController(logger, rsInformer, podInformer, kubeClient, burstReplicas,
		apps.SchemeGroupVersion.WithKind("ReplicaSet"),
		"replicaset_controller",
		"replicaset",
		controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "replicaset-controller"}),
		},
		eventBroadcaster,
	)
}
// NewBaseController is the implementation of NewReplicaSetController with additional injected
// parameters so that it can also serve as the implementation of NewReplicationController.
func NewBaseController(logger klog.Logger, rsInformer appsinformers.ReplicaSetInformer, podInformer coreinformers.PodInformer, kubeClient clientset.Interface, burstReplicas int,
	gvk schema.GroupVersionKind, metricOwnerName, queueName string, podControl controller.PodControlInterface, eventBroadcaster record.EventBroadcaster) *ReplicaSetController {

	rsc := &ReplicaSetController{
		GroupVersionKind: gvk,
		kubeClient:       kubeClient,
		podControl:       podControl,
		eventBroadcaster: eventBroadcaster,
		burstReplicas:    burstReplicas,
		expectations:     controller.NewUIDTrackingControllerExpectations(controller.NewControllerExpectations()),
		queue: workqueue.NewTypedRateLimitingQueueWithConfig(
			workqueue.DefaultTypedControllerRateLimiter[string](),
			workqueue.TypedRateLimitingQueueConfig[string]{Name: queueName},
		),
	}

	rsInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			rsc.addRS(logger, obj)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			rsc.updateRS(logger, oldObj, newObj)
		},
		DeleteFunc: func(obj interface{}) {
			rsc.deleteRS(logger, obj)
		},
	})
	rsInformer.Informer().AddIndexers(cache.Indexers{
		controllerUIDIndex: func(obj interface{}) ([]string, error) {
			rs, ok := obj.(*apps.ReplicaSet)
			if !ok {
				return []string{}, nil
			}
			controllerRef := metav1.GetControllerOf(rs)
			if controllerRef == nil {
				return []string{}, nil
			}
			return []string{string(controllerRef.UID)}, nil
		},
	})
	rsc.rsIndexer = rsInformer.Informer().GetIndexer()
	rsc.rsLister = rsInformer.Lister()
	rsc.rsListerSynced = rsInformer.Informer().HasSynced

	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			rsc.addPod(logger, obj)
		},
		// This invokes the ReplicaSet for every pod change, eg: host assignment. Though this might seem like
		// overkill the most frequent pod update is status, and the associated ReplicaSet will only list from
		// local storage, so it should be ok.
		UpdateFunc: func(oldObj, newObj interface{}) {
			rsc.updatePod(logger, oldObj, newObj)
		},
		DeleteFunc: func(obj interface{}) {
			rsc.deletePod(logger, obj)
		},
	})
	rsc.podLister = podInformer.Lister()
	rsc.podListerSynced = podInformer.Informer().HasSynced

	rsc.syncHandler = rsc.syncReplicaSet

	return rsc
}

queue

The queue is the key to the replicaset controller's sync operation. When a replicaset or pod object changes, the corresponding EventHandler puts the related replicaset object's key into the queue, and rsc.worker, which is invoked from the replicaset controller's Run method (analyzed later), takes items out of the queue and performs the corresponding reconciliation.

Keys stored in the queue have the format: namespace/name.

type ReplicaSetController struct {
	...
	// Controllers that need to be synced
	queue workqueue.TypedRateLimitingInterface[string]
}
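To make the key format concrete, here is a small standalone example (not code from the controller itself) that builds and splits such a key with client-go's cache helpers; the namespace "default" and name "nginx-rs" are made-up example values. The controller.KeyFunc used in the handlers below is essentially a deletion-handling wrapper around the same MetaNamespaceKeyFunc logic.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	rs := &appsv1.ReplicaSet{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "nginx-rs"}}

	// Build the queue key from the object: "<namespace>/<name>".
	key, err := cache.MetaNamespaceKeyFunc(rs)
	if err != nil {
		panic(err)
	}
	fmt.Println(key) // default/nginx-rs

	// Split the key back into namespace and name, as a sync handler typically does.
	ns, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		panic(err)
	}
	fmt.Println(ns, name) // default nginx-rs
}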

The queue is fed by the EventHandlers of the replicaset and pod objects; let's analyze them one by one.

1 rsc.addRS

This method is called when a new replicaset object is observed; it adds the object's key to the queue.

func (rsc *ReplicaSetController) addRS(logger klog.Logger, obj interface{}) {
	rs := obj.(*apps.ReplicaSet)
	logger.V(4).Info("Adding", "replicaSet", klog.KObj(rs))
	rsc.enqueueRS(rs)
}

func (rsc *ReplicaSetController) enqueueRS(rs *apps.ReplicaSet) {
	key, err := controller.KeyFunc(rs)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", rs, err))
		return
	}

	rsc.queue.Add(key)
}

2 rsc.updateRS

This method is called when an update to a replicaset object is observed.

Main logic:
(1) If the UIDs of the old and new replicaset objects differ, call rsc.deleteRS (rsc.deleteRS is analyzed later);
(2) Call rsc.enqueueRS to build the key and add it to the queue.

// callback when RS is updated
func (rsc *ReplicaSetController) updateRS(logger klog.Logger, old, cur interface{}) {
	oldRS := old.(*apps.ReplicaSet)
	curRS := cur.(*apps.ReplicaSet)

	// TODO: make a KEP and fix informers to always call the delete event handler on re-create
	if curRS.UID != oldRS.UID {
		key, err := controller.KeyFunc(oldRS)
		if err != nil {
			utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", oldRS, err))
			return
		}
		rsc.deleteRS(logger, cache.DeletedFinalStateUnknown{
			Key: key,
			Obj: oldRS,
		})
	}

	// You might imagine that we only really need to enqueue the
	// replica set when Spec changes, but it is safer to sync any
	// time this function is triggered. That way a full informer
	// resync can requeue any replica set that don't yet have pods
	// but whose last attempts at creating a pod have failed (since
	// we don't block on creation of pods) instead of those
	// replica sets stalling indefinitely. Enqueueing every time
	// does result in some spurious syncs (like when Status.Replica
	// is updated and the watch notification from it retriggers
	// this function), but in general extra resyncs shouldn't be
	// that bad as ReplicaSets that haven't met expectations yet won't
	// sync, and all the listing is done using local stores.
	if *(oldRS.Spec.Replicas) != *(curRS.Spec.Replicas) {
		logger.V(4).Info("replicaSet updated. Desired pod count change.", "replicaSet", klog.KObj(oldRS), "oldReplicas", *(oldRS.Spec.Replicas), "newReplicas", *(curRS.Spec.Replicas))
	}
	rsc.enqueueRS(curRS)
}

3 rsc.deleteRS

This method is called when a replicaset object is observed to have been deleted.

Main logic:
(1) Call rsc.expectations.DeleteExpectations to delete the expectations of this rs (the expectations mechanism is analyzed separately later; for now just keep it in mind);
(2) Build the key and add it to the queue.

func (rsc *ReplicaSetController) deleteRS(logger klog.Logger, obj interface{}) {
	rs, ok := obj.(*apps.ReplicaSet)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %#v", obj))
			return
		}
		rs, ok = tombstone.Obj.(*apps.ReplicaSet)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a ReplicaSet %#v", obj))
			return
		}
	}

	key, err := controller.KeyFunc(rs)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", rs, err))
		return
	}

	logger.V(4).Info("Deleting", "replicaSet", klog.KObj(rs))

	// Delete expectations for the ReplicaSet so if we create a new one with the same name it starts clean
	rsc.expectations.DeleteExpectations(logger, key)

	rsc.queue.Add(key)
}

4 rsc.addPod

This method is called when a new pod object is observed.

Main logic:
(1) If the pod's DeletionTimestamp is not nil, call rsc.deletePod (analyzed later) and return;
(2) Call metav1.GetControllerOf to get the pod's OwnerReference and check whether the pod has an owning controller; if it does, call rsc.resolveControllerRef to look up the replicaset the pod belongs to, and return directly if it does not exist;
(3) Call rsc.expectations.CreationObserved to decrement this rs's expected pod creation count by 1;
(4) Build the key and add it to the queue;
(5) If the pod has no ControllerRef, it is an orphan: list all replicasets whose selector matches it and enqueue each of them so that one may adopt it (no creation is observed for an orphan).

// When a pod is created, enqueue the replica set that manages it and update its expectations.
func (rsc *ReplicaSetController) addPod(logger klog.Logger, obj interface{}) {
	pod := obj.(*v1.Pod)

	if pod.DeletionTimestamp != nil {
		// on a restart of the controller manager, it's possible a new pod shows up in a state that
		// is already pending deletion. Prevent the pod from being a creation observation.
		rsc.deletePod(logger, pod)
		return
	}

	// If it has a ControllerRef, that's all that matters.
	if controllerRef := metav1.GetControllerOf(pod); controllerRef != nil {
		rs := rsc.resolveControllerRef(pod.Namespace, controllerRef)
		if rs == nil {
			return
		}
		rsKey, err := controller.KeyFunc(rs)
		if err != nil {
			return
		}
		logger.V(4).Info("Pod created", "pod", klog.KObj(pod), "detail", pod)
		rsc.expectations.CreationObserved(logger, rsKey)
		rsc.queue.Add(rsKey)
		return
	}

	// Otherwise, it's an orphan. Get a list of all matching ReplicaSets and sync
	// them to see if anyone wants to adopt it.
	// DO NOT observe creation because no controller should be waiting for an
	// orphan.
	rss := rsc.getPodReplicaSets(pod)
	if len(rss) == 0 {
		return
	}
	logger.V(4).Info("Orphan Pod created", "pod", klog.KObj(pod), "detail", pod)
	for _, rs := range rss {
		rsc.enqueueRS(rs)
	}
}
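The expectations mechanism referenced here is analyzed in detail in a later section. As a rough mental model only, it behaves like a pair of per-ReplicaSet counters. The following is a simplified sketch with invented names; the real implementation is controller.UIDTrackingControllerExpectations seen above, which additionally tracks pod UIDs and a TTL.

package main

import "fmt"

// simplifiedExpectations is a rough mental model: before creating/deleting pods,
// the controller records how many add/delete watch events it expects to observe;
// the event handlers decrement the counters, and the ReplicaSet is only considered
// ready for another sync once both counters reach zero (or the record times out).
type simplifiedExpectations struct {
	pendingAdds int
	pendingDels int
}

func (e *simplifiedExpectations) ExpectCreations(n int) { e.pendingAdds = n }
func (e *simplifiedExpectations) ExpectDeletions(n int) { e.pendingDels = n }

// CreationObserved corresponds to the call in the pod Add handler above.
func (e *simplifiedExpectations) CreationObserved() {
	if e.pendingAdds > 0 {
		e.pendingAdds--
	}
}

// DeletionObserved corresponds to the call in the pod Delete handler below.
func (e *simplifiedExpectations) DeletionObserved() {
	if e.pendingDels > 0 {
		e.pendingDels--
	}
}

// SatisfiedExpectations reports whether all expected events have been observed.
func (e *simplifiedExpectations) SatisfiedExpectations() bool {
	return e.pendingAdds <= 0 && e.pendingDels <= 0
}

func main() {
	exp := &simplifiedExpectations{}
	exp.ExpectCreations(3) // about to create 3 pods
	exp.CreationObserved() // addPod observed one of them
	fmt.Println(exp.SatisfiedExpectations()) // false: still waiting for 2 creations
}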

5 rsc.updatePod

This method is called when an update to a pod object is observed.

Main logic:
(1) Compare the ResourceVersion of the old and new pod; if they are identical, nothing changed, so return directly;
(2) If the new pod's DeletionTimestamp is not nil, call rsc.deletePod (analyzed later) for it (and, if the labels changed, also for the old pod), then return;
(3) Find the replicaset(s) that own or match the pod and enqueue them: if the ControllerRef changed, the old owning replicaset is also enqueued; if the pod has an owning replicaset, it is enqueued, and when the pod has just become Ready and the replicaset has MinReadySeconds > 0, the replicaset is re-enqueued again after that delay; an orphan pod whose labels or ControllerRef changed causes all matching replicasets to be enqueued.

// When a pod is updated, figure out what replica set/s manage it and wake them
// up. If the labels of the pod have changed we need to awaken both the old
// and new replica set. old and cur must be *v1.Pod types.
func (rsc *ReplicaSetController) updatePod(logger klog.Logger, old, cur interface{}) {
	curPod := cur.(*v1.Pod)
	oldPod := old.(*v1.Pod)
	if curPod.ResourceVersion == oldPod.ResourceVersion {
		// Periodic resync will send update events for all known pods.
		// Two different versions of the same pod will always have different RVs.
		return
	}

	labelChanged := !reflect.DeepEqual(curPod.Labels, oldPod.Labels)
	if curPod.DeletionTimestamp != nil {
		// when a pod is deleted gracefully it's deletion timestamp is first modified to reflect a grace period,
		// and after such time has passed, the kubelet actually deletes it from the store. We receive an update
		// for modification of the deletion timestamp and expect an rs to create more replicas asap, not wait
		// until the kubelet actually deletes the pod. This is different from the Phase of a pod changing, because
		// an rs never initiates a phase change, and so is never asleep waiting for the same.
		rsc.deletePod(logger, curPod)
		if labelChanged {
			// we don't need to check the oldPod.DeletionTimestamp because DeletionTimestamp cannot be unset.
			rsc.deletePod(logger, oldPod)
		}
		return
	}

	curControllerRef := metav1.GetControllerOf(curPod)
	oldControllerRef := metav1.GetControllerOf(oldPod)
	controllerRefChanged := !reflect.DeepEqual(curControllerRef, oldControllerRef)
	if controllerRefChanged && oldControllerRef != nil {
		// The ControllerRef was changed. Sync the old controller, if any.
		if rs := rsc.resolveControllerRef(oldPod.Namespace, oldControllerRef); rs != nil {
			rsc.enqueueRS(rs)
		}
	}

	// If it has a ControllerRef, that's all that matters.
	if curControllerRef != nil {
		rs := rsc.resolveControllerRef(curPod.Namespace, curControllerRef)
		if rs == nil {
			return
		}
		logger.V(4).Info("Pod objectMeta updated.", "pod", klog.KObj(oldPod), "oldObjectMeta", oldPod.ObjectMeta, "curObjectMeta", curPod.ObjectMeta)
		rsc.enqueueRS(rs)
		// TODO: MinReadySeconds in the Pod will generate an Available condition to be added in
		// the Pod status which in turn will trigger a requeue of the owning replica set thus
		// having its status updated with the newly available replica. For now, we can fake the
		// update by resyncing the controller MinReadySeconds after the it is requeued because
		// a Pod transitioned to Ready.
		// Note that this still suffers from #29229, we are just moving the problem one level
		// "closer" to kubelet (from the deployment to the replica set controller).
		if !podutil.IsPodReady(oldPod) && podutil.IsPodReady(curPod) && rs.Spec.MinReadySeconds > 0 {
			logger.V(2).Info("pod will be enqueued after a while for availability check", "duration", rs.Spec.MinReadySeconds, "kind", rsc.Kind, "pod", klog.KObj(oldPod))
			// Add a second to avoid milliseconds skew in AddAfter.
			// See https://github.com/kubernetes/kubernetes/issues/39785#issuecomment-279959133 for more info.
			rsc.enqueueRSAfter(rs, (time.Duration(rs.Spec.MinReadySeconds)*time.Second)+time.Second)
		}
		return
	}

	// Otherwise, it's an orphan. If anything changed, sync matching controllers
	// to see if anyone wants to adopt it now.
	if labelChanged || controllerRefChanged {
		rss := rsc.getPodReplicaSets(curPod)
		if len(rss) == 0 {
			return
		}
		logger.V(4).Info("Orphan Pod objectMeta updated.", "pod", klog.KObj(oldPod), "oldObjectMeta", oldPod.ObjectMeta, "curObjectMeta", curPod.ObjectMeta)
		for _, rs := range rss {
			rsc.enqueueRS(rs)
		}
	}
}

6 rsc.deletePod

This method is called when a pod object is observed to have been deleted.

Main logic:
(1) Call metav1.GetControllerOf to get the pod's OwnerReference and check whether it is owned by a controller; if so, call rsc.resolveControllerRef to look up the replicaset the pod belongs to, and return directly if it does not exist;
(2) Call rsc.expectations.DeletionObserved to decrement this rs's expected pod deletion count by 1 (the expectations mechanism is analyzed separately later; for now just keep it in mind);
(3) Build the key and add it to the queue.

// When a pod is deleted, enqueue the replica set that manages the pod and update its expectations.
// obj could be an *v1.Pod, or a DeletionFinalStateUnknown marker item.
func (rsc *ReplicaSetController) deletePod(logger klog.Logger, obj interface{}) {
	pod, ok := obj.(*v1.Pod)

	// When a delete is dropped, the relist will notice a pod in the store not
	// in the list, leading to the insertion of a tombstone object which contains
	// the deleted key/value. Note that this value might be stale. If the pod
	// changed labels the new ReplicaSet will not be woken up till the periodic resync.
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %+v", obj))
			return
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a pod %#v", obj))
			return
		}
	}

	controllerRef := metav1.GetControllerOf(pod)
	if controllerRef == nil {
		// No controller should care about orphans being deleted.
		return
	}
	rs := rsc.resolveControllerRef(pod.Namespace, controllerRef)
	if rs == nil {
		return
	}
	rsKey, err := controller.KeyFunc(rs)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", rs, err))
		return
	}
	logger.V(4).Info("Pod deleted", "delete_by", utilruntime.GetCaller(), "deletion_timestamp", pod.DeletionTimestamp, "pod", klog.KObj(pod))
	rsc.expectations.DeletionObserved(logger, rsKey, controller.PodKey(pod))
	rsc.queue.Add(rsKey)
}

Startup Analysis

Run starts a number of goroutines according to the value of workers; each one repeatedly calls rsc.worker, which takes a key out of the queue and reconciles the corresponding replicaset resource object.

// Run begins watching and syncing.
func (rsc *ReplicaSetController) Run(ctx context.Context, workers int) {
	defer utilruntime.HandleCrash()

	// Start events processing pipeline.
	rsc.eventBroadcaster.StartStructuredLogging(3)
	rsc.eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: rsc.kubeClient.CoreV1().Events("")})
	defer rsc.eventBroadcaster.Shutdown()

	defer rsc.queue.ShutDown()

	controllerName := strings.ToLower(rsc.Kind)
	logger := klog.FromContext(ctx)
	logger.Info("Starting controller", "name", controllerName)
	defer logger.Info("Shutting down controller", "name", controllerName)

	if !cache.WaitForNamedCacheSync(rsc.Kind, ctx.Done(), rsc.podListerSynced, rsc.rsListerSynced) {
		return
	}

	for i := 0; i < workers; i++ {
		go wait.UntilWithContext(ctx, rsc.worker, time.Second)
	}

	<-ctx.Done()
}
// worker runs a worker thread that just dequeues items, processes them, and marks them done.
// It enforces that the syncHandler is never invoked concurrently with the same key.
func (rsc *ReplicaSetController) worker(ctx context.Context) {
	for rsc.processNextWorkItem(ctx) {
	}
}

func (rsc *ReplicaSetController) processNextWorkItem(ctx context.Context) bool {
	key, quit := rsc.queue.Get()
	if quit {
		return false
	}
	defer rsc.queue.Done(key)

	err := rsc.syncHandler(ctx, key)
	if err == nil {
		rsc.queue.Forget(key)
		return true
	}

	utilruntime.HandleError(fmt.Errorf("sync %q failed with %v", key, err))
	rsc.queue.AddRateLimited(key)

	return true
}

The workers parameter here is passed in from the startReplicaSetController method; its value is ctx.ComponentConfig.ReplicaSetController.ConcurrentRSSyncs, which in turn is determined by the concurrent-replicaset-syncs startup flag of the kube-controller-manager component. When the flag is not set, the default value is 5, meaning 5 goroutines are started to process and reconcile the replicaset objects in the queue in parallel.

Let's now look at the concurrent-replicaset-syncs startup flag of the kube-controller-manager component that is related to the replicaset controller.

// cmd/kube-controller-manager/app/options/replicasetcontroller.go
// ReplicaSetControllerOptions holds the ReplicaSetController options.
type ReplicaSetControllerOptions struct {
	*replicasetconfig.ReplicaSetControllerConfiguration
}

// AddFlags adds flags related to ReplicaSetController for controller manager to the specified FlagSet.
func (o *ReplicaSetControllerOptions) AddFlags(fs *pflag.FlagSet) {
	if o == nil {
		return
	}

	fs.Int32Var(&o.ConcurrentRSSyncs, "concurrent-replicaset-syncs", o.ConcurrentRSSyncs, "The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load")
}

// ApplyTo fills up ReplicaSetController config with options.
func (o *ReplicaSetControllerOptions) ApplyTo(cfg *replicasetconfig.ReplicaSetControllerConfiguration) error {
	if o == nil {
		return nil
	}

	cfg.ConcurrentRSSyncs = o.ConcurrentRSSyncs

	return nil
}

// Validate checks validation of ReplicaSetControllerOptions.
func (o *ReplicaSetControllerOptions) Validate() []error {
	if o == nil {
		return nil
	}

	errs := []error{}
	return errs
}

// The default value of the concurrent-replicaset-syncs flag is set to 5.

// pkg/controller/apis/config/v1alpha1/register.go
func init() {
	// We only register manually written functions here. The registration of the
	// generated functions takes place in the generated files. The separation
	// makes the code compile even when the generated files are missing.
	localSchemeBuilder.Register(addDefaultingFuncs)
}

// pkg/controller/apis/config/v1alpha1/defaults.go
func addDefaultingFuncs(scheme *kruntime.Scheme) error {
	return RegisterDefaults(scheme)
}

// pkg/controller/apis/config/v1alpha1/zz_generated.defaults.go
func RegisterDefaults(scheme *runtime.Scheme) error {
	scheme.AddTypeDefaultingFunc(&v1alpha1.KubeControllerManagerConfiguration{}, func(obj interface{}) {
		SetObjectDefaults_KubeControllerManagerConfiguration(obj.(*v1alpha1.KubeControllerManagerConfiguration))
	})
	return nil
}

func SetObjectDefaults_KubeControllerManagerConfiguration(in *v1alpha1.KubeControllerManagerConfiguration) {
	SetDefaults_KubeControllerManagerConfiguration(in)
	SetDefaults_KubeCloudSharedConfiguration(&in.KubeCloudShared)
}

// pkg/controller/apis/config/v1alpha1/defaults.go
func SetDefaults_KubeControllerManagerConfiguration(obj *kubectrlmgrconfigv1alpha1.KubeControllerManagerConfiguration) {
	...
	// Use the default RecommendedDefaultReplicaSetControllerConfiguration options
	replicasetconfigv1alpha1.RecommendedDefaultReplicaSetControllerConfiguration(&obj.ReplicaSetController)
	...
}

// pkg/controller/replicaset/config/v1alpha1/defaults.go
func RecommendedDefaultReplicaSetControllerConfiguration(obj *kubectrlmgrconfigv1alpha1.ReplicaSetControllerConfiguration) {
	if obj.ConcurrentRSSyncs == 0 {
		obj.ConcurrentRSSyncs = 5
	}
}
