Discussion:
[Wien] RAM issues in lapw1 -bands
Coriolan TIUSAN
2018-11-28 12:41:32 UTC
Dear wien2k users,

I am running WIEN2k 18.2 on Ubuntu 18.04, installed on an HP workstation:
64 GB RAM, Intel(R) Xeon(R) Gold 5118 CPU @ 2.30 GHz x 48 cores.

The Fortran compiler is ifc and the math library is the Intel MKL. For
parallel execution I have MPI + ScaLAPACK and FFTW.

For parallel execution (the -p option plus a .machines file), I have
dimensioned NMATMAX/NUME according to the user guide. Standard SCF
calculations therefore run well, without any memory paging issues, with
about 90% of physical RAM used.

However, for supercells, once the case.vector files have been obtained,
when calculating bands (lapw1c -bands -p) with a fine k-mesh (e.g. more
than 150-200 k-points along the X-G-X line), which is necessary because I
am looking for small Rashba shifts at metal-insulator interfaces, all
available physical memory plus a huge amount of swap (>100 GB) gets
filled/used...

Any suggestion/idea for overcoming this issue without adding more RAM?

Why does the memory suffice for lapw1 -p during the self-consistency
cycle, while the -band switch overloads it?

With thanks in advance,

C. Tiusan
Laurence Marks
2018-11-28 12:48:34 UTC
Are you using mpi? If so, what flavor?

http://zeus.theochem.tuwien.ac.at/pipermail/wien/2018-November/028824.html

_____
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu

Coriolan TIUSAN
2018-11-28 13:05:03 UTC
Yes, it seems so, as reported by 'mpiexec --version':

Intel(R) MPI Library for Linux* OS, Version 2019 Build 20180829 (id:
15f5d6c0c)

C. Tiusan
Laurence Marks
2018-11-28 13:24:19 UTC
You may have the same memory leak. There is no fix for this at present, but
it can be avoided by using the 2015 impi version, which you may be able to
download (I am not sure). Other, earlier impi versions may work, although if
you have an AVX512 machine you might run into another issue:
https://www.mail-archive.com/***@zeus.theochem.tuwien.ac.at/msg17779.html
and
https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/783793

I can send (off the mailing list) a small package to test.
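
One way to check whether this really is an accumulating, per-k-point leak
(rather than a single oversized matrix) is to log the resident memory of the
lapw1 MPI ranks while the band run is going. A minimal sketch, assuming the
complex MPI binary is named lapw1c_mpi (adjust to whatever your installation
actually starts):

-------------------------------------
#!/bin/bash
# Minimal sketch: log the total resident memory of the lapw1 MPI ranks
# every 30 s during the band run. Assumes the binary is named "lapw1c_mpi";
# adjust the name to your installation.
while true; do
    rss_kb=$(ps -C lapw1c_mpi -o rss= | awk '{s+=$1} END {print s+0}')
    printf '%s  lapw1c_mpi total RSS = %d MB\n' "$(date +%H:%M:%S)" $((rss_kb / 1024))
    sleep 30
done
-------------------------------------

If the total keeps climbing roughly linearly with the number of k-points
processed, that points to the accumulating, leak-like behaviour; if it jumps
to a large value at the first k-point and then stays flat, the matrices are
simply too big for the available RAM.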

Coriolan TIUSAN
2018-11-29 12:10:52 UTC
Thanks for the suggestion of dividing the band calculation.

Actually, I would like to make a 'zoom' around the Gamma point (along the
X-G-X direction) with a resolution of about 0.001 Bohr^-1 (to get enough
accuracy for small Rashba splittings, k_0 < 0.01 Bohr^-1). I guess I could
simply do the 'zoom' calculation?

The .machines file, given that I have only one node (computer) with 48
available CPUs, is:

-------------------------------------

1:localhost:48
granularity:1
extrafine:1
lapw0:localhost:48
dstart:localhost:48
nlvdw:localhost:48

--------------------------------------

For the supercell attached here, I was trying to do a band-structure
calculation along the X-G-X direction with at least 200 points, which
corresponds to a step of only 0.005 Bohr^-1, not fine enough for Rashba
splittings of the same order of magnitude.
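
For reference, that step size can be checked from the in-plane lattice
constant in the struct file below (a = 5.725872 Bohr): |G-X| = pi/a for this
tetragonal cell, so the full X-G-X path is 2*pi/a. A quick sketch of the
arithmetic:

-------------------------------------
awk 'BEGIN {
    a = 5.725872                       # in-plane lattice constant (Bohr), from the struct file below
    path = 2 * 3.14159265358979 / a    # |X-G-X| = 2*pi/a in Bohr^-1
    printf "X-G-X length          = %.4f Bohr^-1\n", path
    printf "step for 200 points   = %.5f Bohr^-1\n", path / 200
    printf "points for 0.001 step = %.0f\n", path / 0.001
}'
-------------------------------------

This gives about 1.097 Bohr^-1 for the full line, a step of ~0.0055 Bohr^-1
for 200 points (the quoted 0.005 Bohr^-1), and roughly 1100 points for a
0.001 Bohr^-1 step over the whole line, whereas a zoomed segment of, say,
+/-0.05 Bohr^-1 around Gamma would need only about 100 points at that
resolution.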

For my calculations I get: MATRIX SIZE 2606, LOs: 138, RKM = 6.99, and the
64 GB of RAM is 100% filled plus about 100 GB of swap...

Beyond all this, what I would also like to understand is why the SCF
calculation shows no memory 'overload' for 250 k-points (13 13 1 mesh),
while running 'lapw1para_mpi -p -band' makes the memory issue so much more
dramatic?
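
For orientation only, the matrix size quoted above already fixes the memory
scale of a single k-point; a back-of-envelope sketch, counting just the
complex H and S matrices and nothing else:

-------------------------------------
awk 'BEGIN {
    n = 2606                 # MATRIX SIZE reported by lapw1
    hs = 2 * n*n * 16        # H and S matrices, complex*16 = 16 bytes/element
    printf "H+S per k-point      ~ %.2f GB\n", hs / 1024^3
    printf "x ~200 band k-points ~ %.0f GB (only if nothing is freed in between)\n", 200 * hs / 1024^3
}'
-------------------------------------

A few hundred MB per k-point (plus eigenvectors, ScaLAPACK work space and
MPI buffers on top) is harmless as long as it is released after every
k-point; if, as suspected earlier in this thread, something in the MPI layer
keeps accumulating across the ~200 band k-points, the total quickly climbs
into the tens of GB and beyond, which would fit the behaviour described.
This is of course only a rough illustration.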

If necessary, my struct file is:

------------------

VFeMgO-vid                               s-o calc. M||  1.00 0.00  0.00
P 14
RELA
  5.725872  5.725872 61.131153 90.000000 90.000000 90.000000
ATOM  -1: X=0.50000000 Y=0.50000000 Z=0.01215444
          MULT= 1          ISPLIT= 8
V 1        NPT=  781  R0=.000050000 RMT=   2.18000   Z: 23.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -2: X=0.00000000 Y=0.00000000 Z=0.05174176
          MULT= 1          ISPLIT= 8
V 2        NPT=  781  R0=.000050000 RMT=   2.18000   Z: 23.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -3: X=0.50000000 Y=0.50000000 Z=0.09885823
          MULT= 1          ISPLIT= 8
V 3        NPT=  781  R0=.000050000 RMT=   2.18000   Z: 23.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -4: X=0.00000000 Y=0.00000000 Z=0.13971867
          MULT= 1          ISPLIT= 8
Fe1        NPT=  781  R0=.000050000 RMT=   1.95000   Z: 26.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -5: X=0.50000000 Y=0.50000000 Z=0.18164479
          MULT= 1          ISPLIT= 8
Fe2        NPT=  781  R0=.000050000 RMT=   1.95000   Z: 26.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -6: X=0.00000000 Y=0.00000000 Z=0.22284885
          MULT= 1          ISPLIT= 8
Fe3        NPT=  781  R0=.000050000 RMT=   1.95000   Z: 26.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -7: X=0.50000000 Y=0.50000000 Z=0.26533335
          MULT= 1          ISPLIT= 8
Fe4        NPT=  781  R0=.000050000 RMT=   1.95000   Z: 26.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -8: X=0.00000000 Y=0.00000000 Z=0.30245527
          MULT= 1          ISPLIT= 8
Fe5        NPT=  781  R0=.000050000 RMT=   1.95000   Z: 26.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM  -9: X=0.00000000 Y=0.00000000 Z=0.36627712
          MULT= 1          ISPLIT= 8
O 1        NPT=  781  R0=.000100000 RMT=   1.68000   Z: 8.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM -10: X=0.50000000 Y=0.50000000 Z=0.36416415
          MULT= 1          ISPLIT= 8
Mg1        NPT=  781  R0=.000100000 RMT=   1.87000   Z: 12.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM -11: X=0.50000000 Y=0.50000000 Z=0.43034285
          MULT= 1          ISPLIT= 8
O 2        NPT=  781  R0=.000100000 RMT=   1.68000   Z: 8.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM -12: X=0.00000000 Y=0.00000000 Z=0.43127365
          MULT= 1          ISPLIT= 8
Mg2        NPT=  781  R0=.000100000 RMT=   1.87000   Z: 12.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM -13: X=0.00000000 Y=0.00000000 Z=0.49684798
          MULT= 1          ISPLIT= 8
O 3        NPT=  781  R0=.000100000 RMT=   1.68000   Z: 8.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
ATOM -14: X=0.50000000 Y=0.50000000 Z=0.49541730
          MULT= 1          ISPLIT= 8
Mg3        NPT=  781  R0=.000100000 RMT=   1.87000   Z: 12.00000
LOCAL ROT MATRIX:    1.0000000 0.0000000 0.0000000
                     0.0000000 1.0000000 0.0000000
                     0.0000000 0.0000000 1.0000000
   4      NUMBER OF SYMMETRY OPERATIONS
-1 0 0 0.00000000
 0 1 0 0.00000000
 0 0 1 0.00000000
       1   A   1 so. oper.  type  orig. index
 1 0 0 0.00000000
 0 1 0 0.00000000
 0 0 1 0.00000000
       2   A   2
-1 0 0 0.00000000
 0-1 0 0.00000000
 0 0 1 0.00000000
       3   B   3
 1 0 0 0.00000000
 0-1 0 0.00000000
 0 0 1 0.00000000
       4   B   4
---------------------------
You never listed your .machines file, nor do we know how many k-points
are in the SCF and band-structure cases, or the matrix size (:RKM) and
real/complex details.
The memory leak of Intel's MPI seems to be very version dependent, but
there is nothing we can do about it from the WIEN2k side.
Besides installing a different MPI version, one could more easily run
the band structure in pieces: simply divide your klist_band file into
several pieces and calculate them one after the other.
The resulting case.outputso_1,2,3,... files can then simply be
concatenated (cat file1 file2 file3 > file).
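
A minimal sketch of how that splitting could be scripted; the case name, the
50-point chunk size and the exact x calls are placeholders, so adapt them to
your usual band-structure sequence and save whichever output/energy files
your plotting step actually reads:

-------------------------------------
#!/bin/bash
# Sketch of the piece-wise band run suggested above. Placeholders: "case",
# the 50-point chunk size, and the x calls (add -c, -up/-dn, -orb, ... as
# your calculation requires).
case=case
cp $case.klist_band $case.klist_band.orig         # keep the full list
grep -v '^END' $case.klist_band.orig > klist.body # drop the terminating END line
split -l 50 -d klist.body piece_                  # ~50 k-points per piece

for f in piece_*; do
    cp "$f" $case.klist_band
    echo "END" >> $case.klist_band                # each piece needs its own END line
    x lapw1 -band -p
    x lapwso -p
    # Save this piece's output; in a k-parallel run the file may be called
    # case.outputso_1 instead -- copy whatever file your setup produces.
    cp $case.outputso $case.outputso_${f#piece_}
done

cat $case.outputso_?? > $case.outputso            # concatenate, as suggested above
cp $case.klist_band.orig $case.klist_band         # restore the original k-list
-------------------------------------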