Topology optimization of a transient heat conduction problem


Hi all! We are dealing with a transient topology optimization problem aiming at minimizing the amplitude of the temperature oscillation, as the following equation shows:

$$\Phi = \frac{1}{N}\sum_{n=1}^{N}\left(T_{\mathrm{sp}}^{(n)} - \overline{T}_{\mathrm{sp}}\right)^{2},$$

where $N$ is the final number of the time step, $T_{\mathrm{sp}}^{(n)}$ is the spatial average temperature over the boundary at time step $n$, and $\overline{T}_{\mathrm{sp}}$ is the temporal mean of $T_{\mathrm{sp}}$.

For the time being, we set up two global equations with the expressions "d(T_ave,t)-T_sp" and "d(Psi,t)-(T_sp-withsol('sol2',T_ave,setind(t,'last'))/t_range)^2", with one time-dependent step computing the physical field and global equation 1, and another time-dependent step computing global equation 2 and the topology optimization, respectively. This works for evaluating the objective function in the first optimization loop, but since the time-dependent solver of the Optimization solver can only call time-dependent step 2, the objective value stays the same and is never updated. We wonder whether there is a method that can fix this problem and implement the aforesaid objective function correctly.
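For reference, the intended meaning of the two global equations can be sketched as follows (assuming T_ave accumulates the time integral of T_sp over the run stored in 'sol2', and t_range is the total simulated time):

    d(T_ave,t) = T_sp                                       (T_ave at the final time = time integral of T_sp)
    T_mean     = withsol('sol2',T_ave,setind(t,'last'))/t_range
    d(Psi,t)   = (T_sp - T_mean)^2                          (Psi at the final time / t_range = variance of T_sp)

Note that T_mean refers to a previously stored solution via withsol(), which is the part addressed in the reply below.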


1 Reply — Last Post: 17 Dec 2024, 03:49 GMT-5
Kristian Ejlebjærg Jensen COMSOL Employee


Posted: 17 Dec 2024, 03:49 GMT-5
Updated: 17 Dec 2024, 03:50 GMT-5

Hi

You need gradient-based optimization for topology optimization, and that does not support use of the withsol() operator in the objective function. You can open the tesla_microvalve_transient_optimization library model and see how it uses an ODE to define a time-averaged objective.
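As a minimal sketch of that idea (the boundary average operator aveop1 and the total simulated time t_range are assumed names, not taken from the library model), a single Global Equation of the form

    d(T_mean,t) - aveop1(T)/t_range = 0

with a zero initial condition makes T_mean at the final time equal to the time average of the boundary-averaged temperature, and that final value can be used directly as an objective.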

Your case is a bit more difficult, because you need the time average of the deviation from a time-averaged quantity, so I suggest that you double the simulated time and use separate physics interfaces to perform the two integrations. This would involve the following (a sketch of the corresponding equations follows the list):

  1. Add two ODEs (Global Equations) to your model.

  2. Duplicate the Heat Transfer interface (or whatever you use to compute the temperature). Possibly you also have to duplicate all other interfaces, but you need to set up the 2nd set of interfaces to start halfway through the simulation, so that the two integrations below can run one after the other.

  3. Use the 1st ODE to compute the average temperature of the 1st set of interfaces during the first half of the simulation.

  4. Use the 2nd ODE to compute the average deviation of the temperature in the 2nd set of interfaces from the average just computed; this would then be done during the 2nd half of the simulation.
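A sketch of the two Global Equations for this setup, assuming a total simulated time of 2*t_half, a step function step1(t) that is 1 during the first half and 0 during the second, temperatures T1 and T2 from the 1st and 2nd set of interfaces, and a boundary average operator aveop1:

    d(T_mean,t) - step1(t)*aveop1(T1)/t_half = 0
    d(Psi,t) - (1-step1(t))*(aveop1(T2)-T_mean)^2/t_half = 0

At the final time, T_mean holds the time average from the first half and Psi holds the mean squared deviation accumulated during the second half, so Psi is the objective.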

It is not exactly pretty, but I think it is your only hope of getting this working as formulated. Alternatives could be:

A: Minimize the average time derivative of the temperature, as that would also prevent your temperature from fluctuating too much. Note, however, that time derivatives cannot be used explicitly in the objective function, but I guess you would be using an ODE to compute the time average, and then there would be no problem (you get a warning saying it is a problem, but it is not).
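A sketch of option A with the same assumed operator aveop1 and total time t_range (using the squared derivative; an absolute value would work similarly):

    d(PsiA,t) - aveop1(d(T,t))^2/t_range = 0

Here the time derivative appears only inside the ODE that accumulates the average, and PsiA at the final time serves as the objective.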

B: Instead of minimizing the difference to the average, you can minimize the difference to a predefined temperature. With a bit of trial and error you can probably get the predefined temperature very close to the average.
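Option B then needs only one Global Equation (a sketch; T_target is an assumed parameter for the predefined temperature):

    d(Psi,t) - (aveop1(T)-T_target)^2/t_range = 0

with Psi at the final time as the objective, and T_target adjusted by trial and error until it sits close to the observed average.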

I know that it is not what you asked, but I would definitely go for B here.

Best regards,

Kristian E. Jensen

Technical Product Manager, Optimization

